March 17, 2025

Transforming The Future: ContentOps In The Age Of AI (webinar)

In this episode of our Let’s Talk ContentOps webinar series, Scott Abel, The Content Wrangler himself, talks about the future of content operations in the age of artificial intelligence. You may know Scott from his work as a consultant, conference presenter, and talk show host, but in this session, we turn the spotlight back on Scott and ask him what HE thinks about the future of content ops.

Viewers will learn how AI is reshaping content operations, including:

  • Creating seamless system connectivity
  • Transforming content creation, management, and delivery
  • Changing how platforms for professional content creators work


Resources

LinkedIn

Transcript: 

Christine Cuellar: Hey there, and welcome to today’s episode, Transforming the Future: Content Ops in the Age of AI. This webinar is part of our Let’s Talk ContentOps webinar series hosted by Sarah O’Keefe, the Founder and CEO of Scriptorium. And today we have the Content Wrangler himself, Scott Abel, as our guest on the show. Scott’s a great moderator. He created this show and so many great webinars, and we’re looking forward to shining the spotlight on him today to get his expert take on content ops and AI. So without further ado, I’m going to pass things over to Sarah and Scott to get today’s topic started. Sarah, over to you.

Sarah O’Keefe: Thanks, Christine. Hey, Scott, how are you doing?

Scott Abel: I’m good. Can you hear me?

Sarah O’Keefe: Yes. We hear you.

Scott Abel: All right. I wasn’t talking on mute.

Sarah O’Keefe: And we are good to go. Yeah, we’re off to a good start. Nobody’s muted. And this was a fun thing that came up, because what I really wanted to do today was this: for those of you who don’t know, I have sat on many, many, many, many panels with Scott, usually hosted by Scott. And he comes up with these great questions and he asks the panel these great questions, and we all sit there going, “Umm.” So Scott.

Scott Abel: Oh, well. There we go.

Sarah O’Keefe: Welcome.

Scott Abel: Welcome. Hi, my friend.

Sarah O’Keefe: And this is going.

Scott Abel: All right. We’re starting.

Sarah O’Keefe: Yep, yep. It’s going to be fun. Now, I did realize I can’t be too awful about it because in fact, we’re doing another webinar next week where Scott is once again hosting.

Scott Abel: Nice.

Sarah O’Keefe: So yeah, I have to be nice. So, okay, so tell us the short version, I mean the extremely short version of who you are and where you are. And then, I want to ask you about the industry and where the industry is, and what you’re seeing from your life in the industry.

Scott Abel: Okay, great. Come here. Come on.

Sarah O’Keefe: Oh, we have dogs. Yes.

Scott Abel: First, I’m a dog dad.

Sarah O’Keefe: First off.

Scott Abel: This is Pavo, one of the three dogs that I’m currently with today. I am a Content Strategist, and my history is that I started as a Technical Writer and then I helped a bigger company try to figure out how to produce content at scale, which was a totally different thing than I had ever experienced before. Over time, I became proficient at that and worked as a consultant. I had my own company called The Content Wrangler, which started as a consultancy providing billable, hourly advice to companies, and I kind of segued that career into being a content strategy evangelist. And now I’m working with a company called Heretto, which is the sponsor of this webinar series, to help them help other people in business understand the value of content and why it needs to be managed effectively and efficiently.

Sarah O’Keefe: Yeah. And so, what’s the summary of the last year? What’s happening in the industry? What are you seeing from your point of view?

Scott Abel: I would say it’s a big hot mess, pretty much. I think it’s a lot of excitement. People are excited, and that excitement might not always be good. Some people are excited, scared excited, like, “Oh, maybe not.” Other people I think are delighted. And it’s all because of AI, right? We know that this topic is pervasive. Everywhere we go, it’s kind of seeping in. I was at a bowling alley the other day, I just needed to pick up a friend who was at a bowling league, and there was a sign outside about some AI-powered whatever that was a bowling thing. So clearly, it’s escaped content and it’s now in the bowling alleys. So I think that’s the main driver right now. And with all the investment money going in and the uncertainty in the world, I think we’ve got this opportunity to operationalize everything that we do and look for ways to treat our content like a content factory. And I think that’s where content ops kind of plays a role.

Sarah O’Keefe: Yeah. So with AI everywhere.

Scott Abel: Yeah, it is everywhere.

Sarah O’Keefe: Bowling is excellent.

Scott Abel: Bowling alleys, right?

Sarah O’Keefe: What does that mean for us? What does it mean to have AI? And that is in fact, the poll we’re asking. And right now it looks like about 50%. Well, okay, nobody thinks the effect of AI will be minimal. Not a lot of people think there will be some change only. And everybody else is on the, “It’s going to be somewhere between a moderate amount to a lot to all of it.” But what do you think is going to happen? So looking at AI and where it’s going in terms of content ops, operationalization, automation, and all these other fun things, what do you think is going to happen in the near term, so say 12 months, but also three to five years? What’s that going to look like?

Scott Abel: My crystal ball is cracked, people know that, so bear with me there. It could technically be a little off. But my thought is it’s just going to revolutionize everything. Every single thing that you could possibly look at to optimize, you could use generative AI to help you think about how to do that. And a great example is content people. When you’re a content consultant, you usually start off listening to somebody who says they have a problem, and you try to ascertain what it is that they think the problem is. And then, you explain to them the process that you would go through to determine what you think the problem actually is and how they might go about solving it. And in order to do that, you do a thing called a content inventory, where we collect all the content that we know about and we keep track of it somewhere so we can do an analysis of it. And it seems to me that content operations are all the little steps that are involved to do that. And so, why wouldn’t we use AI to rethink all of those little steps that are involved? And how will we do a task analysis that would be similar to a content analysis and inventory all the things that we do in order to make content, and then decide which of those things can be automated, which of those things should be, which is different than could be, right? You can do something, but should you do it? And if you are going to do it, how will you go about doing it? And what things will go away that you won’t need to do manually anymore, and what is the value of the automated process you put in place? So I kind of feel like it’s going to revolutionize how we think about it. And I want to say that the most advanced thinkers in the content space are not going to be worried about the same things we were worried about five or six years ago. They’re going to chug along and try to figure out how to use these new techniques and tools to optimize how they produce content. And that’s not going to be just about generating new words from some LLM, right? It will be about being very precise about exactly how we’re going to do things, why we’re going to standardize it, why we’re not going to standardize some other things, and then how do we make all these things interoperable? And I think that’s the key word there. We’re going to be the keepers of interoperability. The more that we think about our content and the more intentional we are about how we design it, I think will lead to opportunities to showcase the value that technical writers and other content professionals bring outside of just writing the words. We understand a lot of the minutia that’s behind content. And if we can help our systems take advantage of that with this AI capability, I think it’s going to revolutionize who gets to the home run first, right? Who is going to beat the competition because they’re capable? So I think it’s really about capability development and it’s going to change everything that we do.

Sarah O’Keefe: Yeah. And I think we look a lot at the question of technical debt and content debt and just looking at the really, really low-hanging fruit, right? Everybody knows you should be doing alt text and almost nobody’s actually doing it.

Scott Abel: Yeah.

Sarah O’Keefe: Everybody knows we should be doing little short description, abstract kind of things to summarize. Well, as it turns out, those tedious, annoying, time-consuming, and ultimately sort of need-to-be-done, check-off tasks, those two specifically, actually lend themselves quite well to being done by AI to 90 or 95%, and then we go in behind it and just validate that it did it right. And so, what is out there that we can get rid of that is tedious, annoying, and pattern-based, and therefore can be automated? And that leads, of course, to the next question, the number one question that people are asking about AI: “Okay, so are all the tech writers losing their jobs? Am I going to lose my job if I’m a tech writer?” And what do you tell them?

Scott Abel: I don’t think it’s about losing your jobs, I think it’s about whether companies value what it is that you do. So if they feel like the value of what you do is just generating a bunch of words and they perceive that a machine can generate the same words, and I guess in the same value, then you’re going to lose your job, right? But those are probably going to be lessons learned by those big companies or small companies even who try to do that, because there’s so many uncertainties about releasing the beast, so to speak, right? Having AI just do things for writers. I think the writers who understand what the companies are trying to do, and they map all their activities to helping the company achieve their goals, are going to find that their content will be seen in a way it hasn’t been seen before. And we’ve been arguing that there’s a value for content, right? There’s a value to content that helps content customers feel loyal to a brand. How do you put a price on that? It’s squishy. But if we can start to operationalize everything and use these tools, we can determine whether the effort we put in, how much time it took us and what that time was worth, was worth the capability that we developed. Did we get what we wanted at the end? For so many years, we’ve been talking about the inability to measure performance of our content. And I think this technology and the way that we create content in more advanced shops lends itself to being able to count now and be able to quantify the value of what we’re doing. So I really think that’s a big change that will change the way people’s jobs are. And the value will be the companies that see the capabilities coming from the techcomm team will find reason to keep them, right, as opposed to trying to figure out how to replace them. And I still think there’ll be poor decisions made by some companies, and there’ll be the example that we talk about at conferences and future panel discussions. But I think we’re going to see some good stuff and some bad stuff at the same time.
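The alt-text and short-description chore Sarah mentions a moment ago is a natural first automation target, and a good illustration of the draft-then-validate split. Below is a minimal sketch of one way to wire it up; it assumes the `openai` Python package, and the model name, prompts, and file name are placeholders rather than recommendations.

```python
# Sketch: draft missing alt text with an LLM, then route drafts to human review.
# Assumes the openai package; model name, prompts, and file name are placeholders.
import re
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def find_images_missing_alt(html: str) -> list[str]:
    """Return the src of every <img> tag with a missing or empty alt attribute."""
    missing = []
    for tag in re.findall(r"<img\b[^>]*>", html, flags=re.IGNORECASE):
        alt = re.search(r'alt="([^"]*)"', tag)
        if alt is None or not alt.group(1).strip():
            src = re.search(r'src="([^"]*)"', tag)
            missing.append(src.group(1) if src else tag)
    return missing

def draft_alt_text(image_ref: str, surrounding_text: str) -> str:
    """Ask the model for candidate alt text based on surrounding context."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{
            "role": "user",
            "content": f"Draft concise alt text (under 120 characters) for the "
                       f"image {image_ref}, given this surrounding text:\n"
                       f"{surrounding_text}\nReturn only the alt text.",
        }],
    )
    return response.choices[0].message.content.strip()

# The 90-to-95% split Sarah describes: the model drafts, a human validates.
html = open("topic.html", encoding="utf-8").read()
for src in find_images_missing_alt(html):
    print(f"REVIEW: {src} -> {draft_alt_text(src, html[:2000])!r}")
```

Nothing here publishes automatically; the point is that the tedious 90% gets drafted for you while the human validation pass at the end stays in place.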

Sarah O’Keefe: Yeah. So I do want to jump in with a couple of the questions that are coming in through the chat because they are quite pertinent to all of this. But first, so on this poll, we asked, “What will be the impact of AI in content ops?” And we gave you a sort of one-to-five scale from minimal to everything. Nobody said minimal, so 0% said minimal. Only 6% said, “Everything, everywhere, all at once.” But everybody else, well, there were a few, 4%-ish, in the 2 bucket, “There will be some changes, but nothing too drastic.” And then, we have a tie with 44% each for, “A moderate amount of change,” and “It’s going to change almost everything.” So pretty clearly, the group that’s on this call at least is seeing lots of change shading into where you are, which is it’s going to change everything, I think.

Now, in terms of the questions, there’s a big picture question here about generative AI. If you’re using that for content operations, is it required to have “mature processes,” which I note is in quotes, “mature processes,” before you begin applying AI to it?

Scott Abel: Yeah, that would be super smart.

Sarah O’Keefe: But is it required?

Scott Abel: Of course, it’s not required because you can do a shitty job with content operations. So you can try to do it in any old way you want to do it, and you could be less successful than maybe somebody else, or maybe you can be successful enough. Some companies are aiming at mediocrity. They’re not trying to be perfect or exceptional. So I think it depends on what you want to say about that. Tell me.

Sarah O’Keefe: Yeah. So looking at the person that asked the question and the company that they are coming from, which we will not be disclosing today, they are in the healthcare space.

Scott Abel: Yeah. Yeah. Okay.

Sarah O’Keefe: Mm-hmm. Yeah.

Scott Abel: So yes, it should be required in your industry, because mature processes also means mature governance usually. And governance is about executing against your operational plans and making sure that they follow the rules that you’ve set forth so that you can prove that you are achieving the things that you say you’re going to do. And also, so that interoperability is possible, right? With the standardization and interoperability, and then you govern how people do the content, you can be more closely assured that your content will be correct in the end. So I do think there’s a huge role for mature processes. And the companies that are higher up on the maturity scale, for example, are probably going to have an easier time at it, all things considered.

Sarah O’Keefe: Yeah. So basically, if your processes are in reasonable shape, if you have content ops that are in good shape, that are mature, and therefore your content is better, applying Gen AI to that will have better outcomes. It’s interesting to me, because really the question is do I make the machine smarter or do I feed better stuff into a dumber machine? Right? If your content going in isn’t good, you have to do more work inside the machine, inside the Gen AI process, to make sure that what comes out is better. So it’s kind of like do you put in really good ingredients, or do you spend a long time finagling it in the middle? That to me is kind of the question. Now related to that, somebody’s asking the real question, which is, and I’ll just quote this directly, “When is the job market going to rebound from the devastation that AI wrought on the market? When will companies that fired their tech writing teams, because quote, we have quotes again, ‘AI can do it,’ realize that they need to rehire writers?”

Scott Abel: Oh, if only I knew the answer to that, I wouldn’t be on this show. I’d be doing something else making tons of money off that. I have no idea when they will recognize it. If I had to guess, I would say it’s probably going to take an individual bad experience that gets publicized heavily and probably damages the stock price of a company for somebody to see it really badly. And that’s only in the severest situations. I do think there’s a lot of room for having mediocre content for a while. There are some companies where it’s actually their strategy to have basically crappy support content. And that’s a whole other show about why companies intentionally design sucky experiences, and there’s evidence that they do. And it’s for profit reasons. So I’m not sure what’s going to trigger a rebound, and I don’t even know if that’s even fair. I don’t even know if there will be a rebound. Maybe it’ll be a realignment, because I really do think that the job is going to be different in the future. It’s not always going to be what we think it is. We’re probably going to have new roles. For example, why wouldn’t we be AI workflow specialists? We could analyze all the individual components of producing a content factory and being able to output what we need with creation, management, and delivery capabilities. And all of those things are workflow. So we’re going to need somebody who’s savvy about weaving the workflow together if we’re going to operationalize it. And then, they’re going to also need to be savvy about AI tools, which means that your knowledge of FrameMaker is pretty useless right about now, right? It doesn’t matter anymore if that’s your specialty. So I think if you’re going to look for opportunities in the technical communication field, it may be growing your career outside of what it is you were normally doing by adopting some of these AI strategies to help companies do it. Because we know they’re going to try to use them, right? We know they’re going to try to optimize the amount of money they can make and reduce the amount of headcount that they have. And they’re not aiming it at tech writers. There’s no evil person saying, “Let’s get rid of all the tech writers.” It’s really looking at any way they can save money, right, and use it in a different way so that they can reward shareholders. And as long as we know that, I think we can align our skillset and our capabilities to help them do whatever it is they want to do. But we have to shift our thinking. It can’t always be the whiny story about tech writers being fired. The reality is the tech writing job is changing, but every other job is changing. All the people in my life who never want to talk about anything about content all know about AI, and they’re all freaking out. I’m talking about desk clerks at hotels, people that work at a barbershop, people that work at the Treasury Department, for obvious reasons. I’ve heard these stories recently, and it’s not just limited to techcomm. So I think we can expect to see something happen, but who knows?

Sarah O’Keefe: Yeah. And I think the thing that I keep saying is that if the content that you produce as a writer is indistinguishable from what the AI is producing, in the sense that it is so rote and so pattern-based, and so everything, well then the AI probably can do it. Now, whether it’s going to be correct or not is kind of a different question. And then, you go down the road of does it matter? Right? Does it matter if the content is wrong? Well, sometimes it matters a lot and sometimes it doesn’t. Sometimes you’re documenting a video game in a wiki, and it’ll get fixed. It’s just not that big a deal. The video game players will murder you, but literally on screen. But you kind of go down that road. But I think that we have all seen not just mediocre, but terrible, terrible technical writing.

Scott Abel: Right. And it wasn’t the AI that made it terrible. Right?

Sarah O’Keefe: Right. And so, if you’re creating mediocre content, you’re probably in trouble. The other thing I’ll say is that if you look at the marketing side of the world and marcomm content, they for the most part do not have what I would describe as that gate or that moat that is, “This has to be accurate or we’re in trouble with compliance.”

Scott Abel: That’s right.

Sarah O’Keefe: They don’t typically have that. In some spaces they do, but for the most part not. And they have gotten very much disrupted in terms of what it looks like to be a copywriter on the marketing side of the world. So I think it’s worth looking at that.

Scott Abel: I also wonder if we should look at the fact that it’s not always about us writing stuff now. Remember, it’s called generative AI. So the system needs to generate something if we’re going to use generative AI. And we need to be able to train the system, maintain the system, control the system, and I mean we as in human beings who are responsible for that system, not necessarily a tech writer. But if we are knowledgeable about content and able to share what we know with other people across our organization, we can be seen as more valuable. I’ll give you a great example. So in my work with Heretto, I am helping them communicate, right, is basically what I’m doing. And one of the things I recognized was this AI capability is what we’ve been talking about, you and I, Sarah, and others in our industry, especially thought leaders and entrepreneurs. We’ve been talking about the need to separate content from its formatting, and we’ve given all these many reasons. And one of the most important reasons we always give is that you want to be able to separate your content from its formatting so you can deliver the content independent of its formatting, so it can be formatted at the delivery point. And then, we tell people, because there will be delivery channels in the future that you do not predict and you want to be prepared and capable to deliver to them. And guess what? An automated, interactive digital human, somebody that looks like me, that is not me, can immediately be cloned and trained to deliver content. But that content needs to be prepared so it can be delivered there. We do not need another one-off project where now we create content only for the bot and only for this and only for that. If we create it the way we have been, single source publishing, right, using standards so that we can make the content interoperable so that the machines can process it, pass it back and forth, and do all the things we need without us, that creates value for us if we understand how those systems are put together and if we’re the ones helping to create them and maintain them. So at Heretto, for example, I introduced this idea of using a virtual human to deliver some content. And why? Because I shouldn’t be delivering it. I’m the bottleneck. If I have to do the research and if I have to deliver the messaging, I can’t be doing something else. But if I can get a bot to deliver the exact same information because I can control it, I’m not talking about letting a chatbot just make up stuff, I’m talking about if you can control it, and there are ways to do that, you can make a tool that has utility for your company. So I took my technical communication knowledge and I built something that helps the company do something totally different that has nothing to do with technical documentation. But it’s my knowledge of technical documentation and content and these systems that allowed me to build something like that. I’m not a programmer, I’m not a coder. I don’t need to be. You have to be thinking operationally. And if you can apply your techcomm thinking to your company’s problems, you might be able to both improve techcomm content operations and help the company do other things that are valuable.

Sarah O’Keefe: Okay. So let’s break this down a little bit and talk about what it looks like to apply AI at various levels in the process, starting, I guess on the back end, sort of on the authoring back end.

Scott Abel: Yeah.

Sarah O’Keefe: So if I’m sitting there and I need to create content or we need new content, let’s not say that I need to create it, what are the use cases that you envision there? You’ve talked about this a little bit, but starting at the back end, I’m staring at a blank page. What kinds of things can I do with the AI to get going from there?

Scott Abel: Yeah. I think it depends on your situation, of course. But if we rewind back to what I was talking about earlier, I said I think it’s important for us to do an inventory of all the tasks that we do in order to create content. This means a micro inventory, way down to the componentized level of tasks, right? Saying that you write a topic is incomplete information. It doesn’t provide me with sufficient information to know exactly what you’re doing. I need to know all the steps that are involved. And there are so many steps involved in technical communication or creating content of any kind, really. There’s research, there’s drafting things, there’s getting things approved, there’s checking it. There’s making sure it complies with other rules, there’s sharing it with other people. There are so many different things, and we’ve invented all these little one-off ways to do this stuff because it was convenient and we could. And so, now those things are breaking because you can’t optimize and automate all the things that we’ve invented. So I really feel like where we’re at is thinking that way. How does the technical communicator who’s creating content use the tools to do the things that you might want to do? I am going to be doing a presentation at the ConVEx conference where I will talk about some of those things. So I’m not going to preview them all right here, but I’ll tell you that there are lots of rote tasks that we do that could be automated and built into a common toolkit. That’s one of our problems too, is that we’re constantly jumping from system to system. The docs-as-code people love this because they can weave a bunch of tools together, but now the responsibility is to keep them woven together and to keep them functioning properly. And understanding all the minutiae of every task means that if one thing breaks here, we know something will break down there. If you don’t have that knowledge of the granularity of all the tasks, just like if you don’t have the knowledge of all the granularity of your content, you can’t deliver as precise a service as you could if you knew otherwise. So I really do think it’s mimicking the things that we’re doing for content, but doing it for the content production and creation process. And then, you can take that and extrapolate it and do it for content management and then for content delivery. What are the things that can be automated that, as you said, are repeatable, scalable, and machine-processable, things that machines could do if we only taught them the right way to do it?

Sarah O’Keefe: Yeah, and I think one of the really interesting points to me is when we look at generative AI, people say, “Oh, I’m going to create net new content and it’s going to be fantastic.”

Scott Abel: Right.

Sarah O’Keefe: That’s actually the most difficult thing to do with gen AI is to create net new. It doesn’t really work that way.

Scott Abel: No.

Sarah O’Keefe: It is taking what you have and distilling it down. And you said this a few minutes ago, if you think about AI not as a create-new tool, but rather as a quality checker for what you have, does it conform? Does it follow the patterns? Not, “Hey, AI, make some new stuff” but rather, “Hey, AI, look at what I have and tell me if it’s good.” Right? “Tell me if it follows the rules. Find the places where it doesn’t follow the rules.” Those kinds of things. So that’s kind of the back end, where I think broadly I see this as, to your point, a tool similar to a spell checker, right? I don’t write content without spell checking it, and you could do the same thing with this type of thing, similar to validation. Is my XML valid or not? Does it follow the required structure? Those are things that we can automate, and we can do them today. And you sort of extend that to the AI concept. Okay. So we go through this process and then we deliver the content. And we’ve talked a little bit about chatbots on the front end, right, on the end-user end, where they’re requesting content and getting information from the chatbot. But talk a little bit about AI and performance metrics. How might you apply AI to the delivered content to uncover what’s going on in there?

Scott Abel: Yeah. One great example is if you had an AI system deliver the content, so a chatbot or interactive virtual human, it’s just a delivery channel, right? We see it as something more because it looks like us or it mimics a human conversation, but it’s really just a delivery channel. And in order to deliver at scale, we have to have standardized content that’s interoperable, right, that’s going to be able to be switched back and forth automatically without our help. That’s the whole goal. And so, I think we’re going to see kind of a world of gen AI-powered, let’s say, QA systems. They’d be capable of real-time verification and error detection, right? So we want to future-proof our content operations processes by embedding automated checks within the content workflows for things like style, tone, bias, I don’t know, accessibility, factual integrity. And if we have these content validation tools integrated into our content management platforms, they can flag errors and inconsistencies before we ever publish them. That eliminates having to find out after the fact that something’s wrong and then go back and fix it. The machine can be very good at doing the things we can’t, spotting an error on page 49 of the documentation that is incongruent with something 50,000 words, 16 webpages, or 15 chapters later in the book or whatever. It can do that so easily that I really do feel like the quality-checking and maybe even error-reduction possibilities are amazing. And that can help reduce the cost of rework, and also of retranslation or other kinds of things that happen afterwards. But you can also train your AI to learn industry-specific rules. You were talking about how some compliance-oriented organizations have tighter rules or compliance needs than others, and that they’re stricter. So you can enhance the ability, or the strictness, of your system to spot domain-specific inaccuracy like legal disclaimers, medical terminology, things that are specific to an industry sector or a region or a geography of some kind. And then, of course, you can enhance the need for human oversight in those high-stakes or highly regulated industries. And the AI can push the edge cases to the humans and say, “I cannot make a decision about this based on the rules that you’ve taught me. I think a person needs to think about this particular thing.” And if you take it one step further, think about the fact that these AI systems are also remembering what the person or the machine is inputting when it’s having a conversation, which means that it will be able to tell you at the end of the day the things that it was not able to answer because it does not have facts in its database. So when you control where the content comes from, the LLM can’t just hallucinate some stuff from the internet that it learned from who knows where. So I think we’re going to have a QA role that’s super important there. Does that answer your question at all?
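To make the embedded-check idea concrete, here is a minimal sketch of a pre-publish QA gate along the lines Scott describes: a rule list, a model pass, and an escalation path for the edge cases the checker can’t decide. The rules, model name, and JSON convention are placeholder assumptions, not any specific product’s behavior.

```python
# Sketch: pre-publish QA gate that flags issues and escalates edge cases.
# Rules, model name, and the escalate convention are placeholder assumptions.
import json
from openai import OpenAI

client = OpenAI()

STYLE_RULES = [
    "Use sentence case in headings.",
    "Spell out acronyms on first use.",
    "Every image needs alt text.",
]

def qa_check(topic_text: str) -> dict:
    """Return {'issues': [...], 'escalate': bool} for one topic."""
    prompt = (
        "Check the following content against these rules:\n"
        + "\n".join(f"- {r}" for r in STYLE_RULES)
        + "\n\nReturn JSON with keys 'issues' (list of strings) and "
          "'escalate' (true if you cannot decide from the rules alone).\n\n"
        + topic_text
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(response.choices[0].message.content)

result = qa_check(open("topic.dita", encoding="utf-8").read())
for issue in result["issues"]:
    print("FLAG:", issue)
if result["escalate"]:
    print("Edge case: route this topic to a human reviewer.")
```

The escalation branch is the part Scott emphasizes: the machine handles the rote checks and hands the judgment calls to a person.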

Sarah O’Keefe: Yeah, I think so. And it reminds me, I was talking to some people in finance who do actual audits, right? Not content audits, but in the sense of-

Scott Abel: Yeah, audit audit.

Sarah O’Keefe: Audit audit. And they said, “What we’re going to do with AI…” Traditionally, if your company is large and publicly traded and blah, blah, you go through these annual audits and they’re kind of a big deal. Well, they’re still going to do that. But what they’re doing is they are writing AI frameworks that will go in and look at all the finances of this mega-corp, right?

Scott Abel: Yeah.

Sarah O’Keefe: And they are going to work through exactly what you just described, go through all these numbers and all this data and all this information, and find the things that don’t quite match up, flag the things that are inconsistent. This is traditionally what you would do as a freshly minted CPA working for a large accounting firm. You would go in there and you would spend your first year or two or five doing this very tedious look at every single page and uncover these inconsistencies the hard way. And now they’re saying, “Well, you know what? Throw the AI at it. Let it do that first pass and say, ‘Hey, I see some stuff here and here and here.'” It’s not going to replace the need for humans, but it’s going to do that initial pass of looking for the things that aren’t quite right, and then go from there into the actual audit, the actual work. I think that idea, that you can use AI for quality checking, is kind of underappreciated. We talk so much about the quality of the AI output, right? And this is how do we use AI to fix the quality of, I guess, the input, right?

Scott Abel: Yeah. But if you ask it a simple question like, “Could you identify places in this document where the content seems similar but may be different in a significant way?” and then define what “significant” means, the system can help you spot those things really quickly. But I even thought of another idea just now. So let’s assume that viewers of the show are in a publicly traded company. And we’ve said in the past, on panels where we were asked, “How would you decide what to tell your bosses if you want to convince them that you should be able to invest some money or some time and resources into producing content at scale, for example?”, that you need to be able to align your messaging with what the company leaders want to accomplish. So what if you could have the AI look at your company’s public information that it provides to its shareholders and to the Securities and Exchange Commission in its annual report, where they often say what they intend to do with the investment money that they receive that year in order to improve the company? It’s not unusual in a public disclosure like that for shareholders to learn that the president of the company is aware that there’s a customer experience problem, and so, “Therefore, we’re going to invest 25% of all new expenditures trying to increase customer experience and reduce churn” or something like that. So now you know what the company wants to do. You can ask the AI to align your idea for your technical content improvement project to the company’s goals in accordance with the documents that they publish for the public to know what it is they’re supposed to be doing and why they’re doing what they’re doing. So you would align your messaging with that, and the AI can make sure that everything you suggest to your boss aligns to some point that they care about, and it could even link to the place in the annual report to make it super easy for the boss to see the value that you’re bringing, where you’re saying, “Hey, I’m aligning exactly what we’re doing with what you’ve told the public you are trying to achieve as the leader of the company.” That would be super easy and super fast for it to do. You and I could do that, Sarah, without AI’s assistance. But we would have to go find the annual report, read the annual report, make some decisions about it, write a whole bunch of stuff down, map up our ideas, validate whether that’s true or not, blah, blah, blah. And the AI could help us do that with really record speed. And I think it would help us make better arguments that management cares about, instead of us going in and complaining, “We hate our tools and can you give us some money?”

Sarah O’Keefe: Yeah. I have in fact done the trolling through the annual reports to figure out what’s going on.

Scott Abel: Yeah. It is interesting, isn’t it?

Sarah O’Keefe: It’s super useful. I don’t know if this is quite a related question, but it kind of builds on this, asking about some of the pattern-based stuff. And setting aside the compliance issues, so assuming a company that isn’t in a compliance-bound situation, the question here is, this person said, “I’ve also attended conference sessions where folks talk about only documenting the top 20% of tasks the users do and letting the rest go. Where would you focus the AI in a situation like that?”

Scott Abel: Oh, I don’t know the answer to that question. I don’t know. I haven’t thought about that. I think off the top of my head, I would say I probably wouldn’t do that project. I would probably find something else to do, because it doesn’t seem like it’s going to succeed. And let me throw a different scenario at you and see where this lands. So a software company that creates API documentation, reference materials, put an LLM in front of the set of API reference documentation, and then they asked some developers to use it, and then they asked people like me to watch them use it. So we were doing basically a usability test, watching them and asking them why they were doing what they were doing when they were doing it. What did they do? They searched for a parameter, and the documentation is reference material. It has a section called parameters. It pulled the parameter up and gave the parameter to the developer immediately. That content was in the original data set, so the LLM could find it, and it was instructed to use that data set as the truth, the source of truth. “Don’t be making it up from the internet, learn it from here and tell us what the answer is.” So then, the next question was, “What if I don’t do that?” Well, guess what? In the reference documentation, there’s no “what if” documentation. There’s no content in there that says what happens if you do this, or why you would do this, or why you wouldn’t do this. It’s not in there. So if you tell the LLM, “You cannot use the creativity of the internet to hallucinate,” then you must provide all of the answers to all the questions. And so, in a set of technical documentation that does not have why information, that only has how or reference information, you’re not going to be able to answer all the questions. And so, the mediocrity is in the way that we designed it. It’s not in the content itself. It’s not that we only did 20 topics and therefore we avoided the other ones. The system will generate bogus answers if you allow it to and if you don’t feed it the correct answers. But what if, at the end of the day, the AI could tell you all the things it was unable to answer because those facts weren’t in your database? And then, you could go back and add that to it and then redo that test with those same questions and see if the AI can answer them correctly. I think there’s something there.
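The setup Scott describes is retrieval-augmented generation with a closed book: the model may answer only from the approved documentation, and every question it can’t answer gets logged as a content gap. Here is a minimal sketch under those assumptions; the naive keyword retrieval stands in for a real search index, and the model name and prompt convention are placeholders.

```python
# Sketch: answer only from approved docs; log what the corpus cannot answer.
# Naive keyword retrieval stands in for a real index; model is a placeholder.
from openai import OpenAI

client = OpenAI()
unanswered_log: list[str] = []  # review this to find missing "what if" topics

def retrieve(question: str, docs: dict[str, str], k: int = 3) -> list[str]:
    """Crude relevance: count shared words. A real system would use embeddings."""
    words = set(question.lower().split())
    scored = sorted(docs.items(),
                    key=lambda kv: len(words & set(kv[1].lower().split())),
                    reverse=True)
    return [text for _, text in scored[:k]]

def answer(question: str, docs: dict[str, str]) -> str:
    context = "\n---\n".join(retrieve(question, docs))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[{
            "role": "user",
            "content": "Answer ONLY from the context below. If the answer is "
                       "not in the context, reply exactly UNANSWERABLE.\n\n"
                       f"Context:\n{context}\n\nQuestion: {question}",
        }],
    )
    text = response.choices[0].message.content.strip()
    if text == "UNANSWERABLE":
        unanswered_log.append(question)  # content gap: a topic to write
        return "Sorry, that isn't covered in the documentation yet."
    return text

docs = {"parameters": "The timeout parameter sets the request limit in seconds."}
print(answer("What does the timeout parameter do?", docs))
print(answer("What if I omit the timeout parameter?", docs))
print("Content gaps:", unanswered_log)
```

The second question lands in `unanswered_log`, which is exactly the feedback loop Scott proposes: add the missing “what if” topic, then rerun the same questions.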

Sarah O’Keefe: Yeah, that’s interesting. And I think a couple of other things. I have actually seen this done in a pre-AI world where people said, “You know what? We’re just going to address the top questions and then we’ll keep adding to it as we have time.” So first of all, how do you know which are the top 20%? Is it your top 20%? Is it your users’? And we’re right back to how good is your data? Right? How much do you know about what questions they’re asking? And then, the other thing I’ll say is that technical documentation in general, along with learning content and support, falls into the bucket of enabling content. The job of techcomm is to enable a person to do the work that they’re actually trying to do, right? So to your point, when they’re looking up parameters, their job is not, “Look up a parameter.” Their job is, “Write some code and I need that parameter.” Or their job is, “Write code that does a certain thing. And in order to do that, I need to understand your API.”

Scott Abel: Right.

Sarah O’Keefe: My job is not look up things in the API. My job is get the answer. And so, as a technical content person, your job is to actually provide the answers, right, to all the questions that you don’t know people are going to ask.

Scott Abel: Right.

Sarah O’Keefe: So while I can make a case for identify the top 20%, do those first, and then add things on, I would very much want to have a tail end on that, that is, to your point, Scott, looking at all the failed searches and adding that information as you go. I would also ask some obnoxious questions about what are the consequences of people not finding the content? Because in consumer products, the consequences of people not finding what they need to enable them to use the product successfully usually is that they return the product, which is your best case scenario. And your worst case scenario is that they keep it and they talk smack about it to all of their friends.

Scott Abel: Yeah.

Sarah O’Keefe: So I’m kind of with you, and I’m not sure I like this project, where we just write off the rest because the top 20% is covered. Great, it solves 80% of the problems, leaving you with the 20% of the problems whose answers need to be in that other 80%, the sort of long-tail content.

Scott Abel: If you’re a technical writer and you feel like you must comply with wherever it is that you work, and they have a bad idea that maybe you don’t agree with, the bad idea being, “We’re only going to create 20 topics and then we’ll figure out what the rest of them are,” that’s great. You could create a hundred topics and you could have wasted time creating 80 of them that nobody will ever visit, because you wouldn’t know until after you have performance metrics. But have you ever done a survey? I’m not a professional survey designer, but I’ve run lots of surveys and I’ve done survey analysis and written things about survey results. One year I decided I wanted to have a different kind of survey. I didn’t want everything to be multiple choice, so I opened up a couple of open-ended questions and gave the survey respondent a little text box they could type stuff into. I thought that would be great because it would be filled with useful information. It was. But when 750 people fill out a spreadsheet and put useful information in it, it takes you an awful long time to figure out what any of that means. When you have a question that’s multiple choice and everybody picks one answer, like your polling questions, you can see immediately what the results are and how many people answered each thing. But what if the AI could crawl through all of your logs of all these failed searches and all these other things and make sense of all the comments that people leave? Because the comments are not standardized, right? Because it’s not standardized, you can’t run a keyword search to ask, “Who thought this sucked?” Right? But the AI could go through all the comments and then categorize which ones are probably leaning toward, “This is not a good experience,” and which are, “I loved my experience.” And it could discern maybe some of the things that are wrong with your content and help you direct your efforts. Maybe it would help you create new topics that you didn’t include in the first set of 20 topics, or rewrite some of the ones that you did because they failed to answer the questions in the way that people expected. I think those are all ways we could use the tools to do things for us that we would otherwise have to do manually and that are just too time-consuming. Looking through a spreadsheet that’s not full of numbers is not a good use of your time.
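The 750-comment spreadsheet Scott describes is exactly the kind of unstandardized text a model can triage. A minimal sketch, assuming a CSV of comments and the same `openai` package; the category list, model name, and file name are placeholder assumptions you would tailor to your own survey.

```python
# Sketch: triage free-text survey comments into rough categories.
# Category list, model, and file name are placeholder assumptions.
import csv
from collections import Counter
from openai import OpenAI

client = OpenAI()
CATEGORIES = ["positive experience", "negative experience",
              "missing content", "wrong content", "other"]

def categorize(comment: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[{
            "role": "user",
            "content": f"Assign this survey comment to exactly one category "
                       f"from {CATEGORIES}. Return only the category.\n\n"
                       f"{comment}",
        }],
    )
    return response.choices[0].message.content.strip().lower()

# Assumes a CSV with a "comment" column, one row per respondent.
with open("survey_comments.csv", newline="", encoding="utf-8") as f:
    comments = [row["comment"] for row in csv.DictReader(f)]

tally = Counter(categorize(c) for c in comments)
print(tally.most_common())  # the "gist" Scott mentions, without 18 hours of reading
```

The tally is the gist; the per-category rows are where you go digging for the topics to add or rewrite.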

Sarah O’Keefe: It’s not a fun time.

Scott Abel: It’s not fun and it’s not easy, right? And it’s not accurate. And the AI could do it a lot faster and then give you at least the gist of the data. And just think about, if you knew the gist and the gist is, “I’m going in the wrong direction,” well then good. You didn’t waste 18 hours trying to discern information that we captured in a spreadsheet because we decided it was OK to be mediocre and use a numbers-based tool to put words in. Right? It doesn’t make any sense to me when I think about it intellectually.

Sarah O’Keefe: Yeah. Well, I’ve said repeatedly that it turns out that the content management system with the largest market share in the world is Excel.

Scott Abel: Excel.

Sarah O’Keefe: Yeah. Excel. Okay. I refuse to do a presentation on AI where we don’t at least touch on bias.

Scott Abel: Oh, right.

Sarah O’Keefe: Yeah. So talk to me about bias in AI in whatever bucket makes the most sense to you.

Scott Abel: I think there’s a big concern about bias in AI. And the thing that I’ve recognized in my own learning about it is that first you have to understand bias before you understand bias in AI. So if you do a little bit of research and understand where bias comes from, you see that it’s a human thing, something that is natural for us. It would of course make sense that these systems are replicating all the stuff that they learn from us by copying our content and listening to our words and thoughts. I don’t know exactly where all this will land, but it just seems like bias is going to be there, because the AI is using our biased content in order to generate these words for us, so it’s going to pick up on bias. But why couldn’t we use a bias filter? If we can filter out other things, why can’t we filter out bias? Bias is a definable thing, right? I think people who understand it more than I do could probably help us define exactly what we’re looking for. And we could probably build bias detection functionality into our systems that would prevent us from doing that, just like it would prevent us from violating the style guide or violating a compliance order of some sort.
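Scott’s bias filter can piggyback on the same validation plumbing as a style check: a rule layer for the patterns you can name, and a model layer for the framing you can’t. A minimal sketch under those assumptions; the wordlist and prompt are placeholders, not a vetted bias model, and real bias review still needs people who understand the domain.

```python
# Sketch: a bias pass bolted onto the same validation pipeline as style checks.
# The wordlist and prompt are placeholder assumptions, not a vetted bias model.
import re
from openai import OpenAI

client = OpenAI()

# Rule layer: flag gendered defaults a style guide might already ban.
GENDERED_DEFAULTS = {r"\bchairman\b": "chair",
                     r"\bmanpower\b": "staffing",
                     r"\bhe or she\b": "they"}

def rule_flags(text: str) -> list[str]:
    return [f"Found '{pat}'; consider '{fix}'."
            for pat, fix in GENDERED_DEFAULTS.items()
            if re.search(pat, text, flags=re.IGNORECASE)]

def model_flags(text: str) -> str:
    """Model layer: catch framing the wordlist cannot, e.g. stereotyped examples."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[{
            "role": "user",
            "content": "List any stereotyped examples, one-sided framing, or "
                       "assumptions about the reader in this content. If none, "
                       f"say NONE.\n\n{text}",
        }],
    )
    return response.choices[0].message.content

draft = "The chairman should ask his assistant to schedule the demo for the guys."
print(rule_flags(draft))
print(model_flags(draft))
```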

Sarah O’Keefe: Yeah. There’s some dumb examples of this that have been helpful to me in understanding what bias looks like and what happens when you apply machines to it. So if, for example, you ask an AI to generate a picture of a CEO, you will typically get men.

Scott Abel: Yeah.

Sarah O’Keefe: Well, most CEOs, at least in the US, are in fact men. And so, is that bias? It’s just directly reflecting what’s in the data set.

Scott Abel: Right.

Sarah O’Keefe: Now the data set has an issue, right?

Scott Abel: Yeah.

Sarah O’Keefe: And that’s what you have to really watch for, that those assumptions are baked into the groundwater. And we’ve been talking a little bit about edge cases and how AI will find edge cases. Sometimes an edge case and bias intersect. There was a project in the Netherlands where they were looking for welfare fraud. And what they did was they built an algorithm, some machine learning, that looked at the data set of people that were applying for welfare. And the gist of it was that if you looked unusual relative to the data set, then you got tagged as, “We should look at this person more closely.” And what happened was that the large majority of people applying for welfare were Dutch, born in Holland, right? That was kind of their okay data set. And then, there was the small percentage of people that were new to Holland, who had come in as refugees and were applying for welfare. That was actually a very unusual case. And as a result, it got flagged as, “These people obviously need to be investigated because they are an edge case.” But they were an edge case because there were so few of them. And so, they looked like not the pattern. And it turned out, when they went back and sampled the data not using machine learning, the incidences of welfare fraud were actually, percentage-wise, higher in the core, the norm sample, than they were in these outliers. The outliers were defined as outliers because they didn’t fit the pattern. But they weren’t identified based on anything that said, “This is fraud.” It wasn’t their numbers that were problematic, it was actually their demographics being different from, again, the core or the norm or the expected, or whatever you want to call that. That’s a pretty good example of bias getting lifted through the algorithm, because the algorithm looks for a nice flat pattern, and if it doesn’t see one, it goes “Ping” and it highlights that for you.
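Sarah’s welfare example is easy to reproduce with a toy anomaly detector. In the sketch below, two groups have identical behavior, but the smaller group gets flagged far more often simply because its demographic feature is rare. The data is synthetic, and scikit-learn’s IsolationForest is just a stand-in for whatever model the Dutch system actually used.

```python
# Toy sketch: rarity alone gets a group flagged, even with identical behavior.
# Synthetic data; IsolationForest stands in for the anomaly detector described.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
n_major, n_minor = 950, 50  # 95% vs 5% of applicants

# Column 0: a behavioral feature, same distribution for both groups.
# Column 1: a demographic feature that differs between groups.
behavior = rng.normal(0.0, 1.0, n_major + n_minor)
demographic = np.concatenate([np.zeros(n_major), np.ones(n_minor)])
X = np.column_stack([behavior, demographic])

clf = IsolationForest(contamination=0.05, random_state=0).fit(X)
flagged = clf.predict(X) == -1  # -1 marks "anomalies"

print("flag rate, majority:", flagged[:n_major].mean())
print("flag rate, minority:", flagged[n_major:].mean())
# The minority flag rate comes out far higher even though behavior is
# identical: the algorithm pinged the rare demographic, not the fraud.
```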

Scott Abel: Yeah. And I wonder if it’s also about how we train these models. For example, if we ensured that AI models were trained on diverse, representative data sets, we could reduce some of the risk of these biased outputs. But as you pointed out, it’s also contextual. So for example, if you had a knowledge base that was designed for global audiences, you would want to train the AI models with localized data to ensure that cultural sensitivity and appropriate tone were used when you were communicating with people from those locales or those persona groups, whoever you’re targeting. And the benefit to you is that it reduces the risk of the outputs favoring a dominant culture, which is what you were trying to point out there, where the anomaly is the thing that is reinforcing the stereotype. It’s not the actual thing, it’s the data itself. And if we understood that a little bit better and we were able to incorporate data from underrepresented groups, from diverse industry sectors that would be different than the average, and from the varied educational backgrounds of the people that are probably reading the content, we could teach the AI model to deliver more precise or more individualized experiences that are valuable and that try to avoid the biases that are captured in the generic data. Just looking at the men issue is a perfect example. It’s so easy for the AI to assume that many of these roles are men because that’s probably what it was trained on. And the voices in AI voice generation software could do men more easily at first because they had a whole bunch of male voices in there to test with.

So I think bias is definitely one of those issues, bias, ethics, all those things are going to crop up, and those are things maybe that content operations will be aware of. But because we’re not looking to generate content all the time, we’re looking to automate our processes and streamline the production of content, the AI can actually do tasks for us that are not about copying somebody’s work or regenerating content that it doesn’t own. Instead, it’s about assembling the steps necessary to produce content with the least amount of waste and the most effective processes available that machines can run for us.

Sarah O’Keefe: Yeah. Okay. So folks on the show, this is your last call for questions, and we’ll try and get as many of those as we can in. I’ve got a bit of a backlog here, so I’m going to try and get through these.

Scott Abel: Ah, okay.

Sarah O’Keefe: So Scott, there’s a question here about documents that have multiple writers. “How can I use them to make the voice consistent throughout those documents?”

Scott Abel: I think you could do that a couple of ways. So AI-powered copilots, or tools that help authors create, manage, and deliver the tasks necessary in order to make content for whatever company they work for, can be used behind the scenes to help you do a variety of tasks that are not about writing. If you think through what’s going to happen in the industry, it probably isn’t a jump to think that AI capabilities will be woven into the tools that we currently use or the tools that we’ll use in the future, which means that maybe a component content management system will not only be remembering topics for us and then allowing us to reuse them in a systematic way, but we’ll also be able to reuse the rules. Share the rules, share the prompts, share the generative AI capabilities that maybe one individual created. And once we learn to share them and collaborate on them, I think we’ll see that each person writes a little differently. How can we get the tool to help unify our messaging all at once? Today you would have to take the content out of your system and then put it into another system and then copy it back into the other system, or have an API go back and forth. And the APIs are not all designed yet, because every one of these AI software companies would have to develop integrations for all the different tools that are out there, and they’re just not mature enough to do that yet. So I do think there’s something there about multiple authors: the authoring tool, the copilot, the tool that helps the authors, would have to crawl across all the sets of content in order to do that. And most of them are being implemented cautiously by software companies who are trying to introduce AI one step at a time, in a way that doesn’t mess up what they’re currently doing, and they want to get it right. So I think it’s going to be a challenge for a little while, but I would expect our tools are eventually going to adapt to AI and have these capabilities built in, and then allow each tool to interchange that content between different systems.
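Until those integrations mature, a team can approximate the shared-rules idea outside the tools: keep one voice specification under version control and run every author’s topics against it. A minimal sketch, with the voice spec, model name, and folder layout as placeholder assumptions.

```python
# Sketch: check multi-author topics against one shared voice specification.
# The voice spec, model, and folder layout are placeholder assumptions.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

VOICE_SPEC = """\
Voice rules (shared by all writers):
- Second person ("you"), active voice, present tense.
- Short sentences; no idioms; no exclamation points.
"""

def voice_report(topic_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[{
            "role": "user",
            "content": f"{VOICE_SPEC}\nList every sentence in the content "
                       "below that breaks a rule, with the rule it breaks. "
                       f"If none, say CONSISTENT.\n\n{topic_text}",
        }],
    )
    return response.choices[0].message.content

# Run the same shared rules over every author's topics.
for path in sorted(Path("topics").glob("*.md")):
    print(f"== {path.name} ==")
    print(voice_report(path.read_text(encoding="utf-8")))
```

Because the spec is a plain file, it can live in the same repository as the content, which is the sharing-and-collaboration step Scott describes, just without waiting for the CCMS vendors.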

Sarah O’Keefe: Yeah. One of the things that’s interesting to me is that a lot of the tools that actually have been doing this kind of work, have machine learning and AI built in, have sort of gotten overtaken by events. They’re saying, “Well, yeah, we’ve had this all along.” We have writing assistance tools and you can integrate them with a lot of the systems. And they do have AI under the covers. They just don’t necessarily say so. And so, it’s kind of interesting. Okay.

Scott Abel: And it’s not about generation either.

Sarah O’Keefe: No.

Scott Abel: Those tools are about validation and checking.

Sarah O’Keefe: Right. So we did ask about the focus of AI strategy for your product content: is it productivity? Is it information access? Is it both, or is it neither? It is 4% neither, 22% are saying productivity, 14% information access, and almost 60%, 59%, said both. So that’s a pretty strong and interesting kind of use case that we’re going to look at. Okay. Now I have another question here. This is tying back to where we started, which is AI and whether we’re going to use it in our jobs; that if you write text that the machine can write, you’re going to lose your job, yes. And this commenter says something that I’m afraid I don’t agree with at all, which is, “I’d add almost all tech writers have blown past that kind of work years ago.” I’m going to say maybe the people on this call, maybe the people doing this kind of research, but I would not agree that all tech writers, or even a large percentage of tech writers, have blown past writing stuff that is no better than what the AI can do. Let me put it that way. And a lot of that is people who have a tech writing job but don’t have the role. They have the assignment, but they are just being made to do it on the side. And they’re not really sort of in the space as professionals; it’s just something that got dumped on them. Okay. So moving past that comment, which continues, “Any newcomers start past that; we must be able to do this,” the question is, “Does that mean that maybe very little of our work as it is now is going to be affected by AI? Is this as impactful or as important as Microsoft Word, and not as important as DITA?”

Scott Abel: Maybe. You say it’s maybe not such a big deal. It’s also challenging because there are technical communicators at every level there, as you pointed out. For some of them, it’s like a sideline job because they’re a communicator and, “Oh, put Tina in charge of that too.” Right? And that’s not really the same thing as having a technical documentation strategy that is aligned with your company’s taxonomy. That’s a much more complicated thing than just writing manuals. So I think there is some truth to that for the technical writers in the sector who do advanced information management. They’re creating XML content, they’ve been doing it for years. It’s structured, it’s interoperable, it’s machine-readable. They’re way up the food chain, and those jobs are probably not going to be going away. I do think the complexity of the job is going to increase. The amount of knowledge you’re going to need in order to make systems interoperable, and to make sure that everything is working, and checking it and validating it and making sure that the quality is there, is going to be our new job. I don’t think it’s going to be a lot of worry about placement of a serial comma. The machine can do that. You just make a rule that says, “Never, ever will there be a sentence without a serial comma. Follow this rule.” You can probably do that. If you’re the editor and you think your value is in being persnickety, that’s not really valuable right now. And the same thing for writing prose, right? You could think that you’re really good at writing prose, but once the machine knows your pattern, it can write that too. Do we want it to do that? Probably not. I think we want to try to get to where we couldn’t go before. So think about all the technical writers who you and I, Sarah, have met over the years who have said, “Oh, Sarah, Scott, I hear what you’re saying. My company will never do any of this, so I’m just going to sit here and type in Microsoft Word and cry.” Right? That’s probably going to change, because tool vendors are going to start to be able to make new tool capabilities. We’re going to devise new ways to take all that unstructured content and move it over someplace else, and maybe new tools will help us structure it faster, better, quicker, easier, cheaper. But I don’t think it’s going to be magical, and I still think tech writers will have a job. But the low-hanging fruit tech writers, the ones just generating some, I don’t know, necessary-evil documentation that the company says, “We don’t really value, but we have to produce,” if the company doesn’t value it, they don’t care whether your technical writer does it or a machine does it, because they don’t value it. So there must be some connection between the value of the information and where we’re going to head in the future as far as jobs are concerned.

Sarah O’Keefe: All right. I have a doozy of a final question.

Scott Abel: All right.

Sarah O’Keefe: And you get one minute, which you might think is a good thing when you hear this, because you might want to keep it short because, wow. Okay. “Do you think you will be able to effectively ask an AI help system a question in a foreign language? The AI system will parse the English content and then return the answer in the user’s language. In this way, translating documentation becomes no longer necessary.”

Scott Abel: Yes, I totally think you can do that. I think you can do that. I think that some companies will do that. I think that some companies will do it and it will be a hot mess, because they won’t invest the time. Maybe they skip steps on everything. Maybe they’re not just skipping steps on documentation maturity, right? Maybe they’re skipping a whole bunch of things, and if they skip, I think they’re going to find out that’s not going to be very pretty. Because translation is not about the exact matching, the fuzzy matching, of the words, right? You’re going to have to actually feed it information and data about your actual customers, not the people that you think are the content consumers, but the real customers. And language is so nuanced. There are so many things about transcreation versus translation. Transcreation is kind of localizing the content for the people that you know are speaking that language, in the place that they’re speaking it, in the situation in which they exist, in the country they exist in, in the cultures they exist in. That’s a very specific thing. I think AI will be good at doing it in the future. I do not think it’s something that it’ll be really good at right now. I think that needs to change.

Sarah O’Keefe: I think the premise here is that everything gets mapped back to English. I think what’s actually more likely is that the machine translates all the content and applies a local-language AI to it in order to get your results, instead of back-translating everything. With that said, I’ll also point out that when DeepSeek came out a couple of weeks ago, there were immediately a couple of really, really interesting articles about the linguistic nuances that were introduced. Because ultimately, it looks as though DeepSeek is operating in Chinese, which has a different grammar and a different linguistic shape, so something to consider. Oh, and thank you to the anonymous commenter who slides in under the wire saying, “We have a translation team who is testing this out with our content.” And then, I misread this to say good at romance content, but what it actually says is “good at Romance languages, horrible with Arabic.”

Scott Abel: Oh, okay. And that kind of makes sense too, because of the complexities of language: right-to-left versus left-to-right, character-based versus word-based. And that’s a lot. It’s a lot for humans to think about, and a lot to train the system on properly, all the cultural nuances of translating and trans-creating that content. I think there’s a lot of possibility, but it’s probably going to be a long time before it gets to be perfect.

Sarah O’Keefe: Yeah. Okay. Well, with that, we are so out of time. Christine, I’m going to throw it back to you. She’s supposed to get five minutes to wrap up and she’s getting approximately four seconds.

Christine Cuellar: That’s okay. I can do it fast. Thank you all so much for being here, and please remember to rate and provide us feedback. Also, save the date for our next webinar, which is going to be April 30th. And our guest is going to be Christina Halverson. That’s going to be about how humans drive content ops, navigating culture, personalities, and more. So be sure you’re there for that. And thank you so much for being here again, great to have you, and we hope you enjoy the rest of your day.

Sarah O’Keefe: Thanks.