10 Steps to Process Improvement with Generative AI

  • Welcome and Introduction: Join the insightful webinar on "10 Steps to Process Improvement with Generative AI."
  • Host and Organization: Hosted by Tiffany Yakilino from IIBA, an independent association for business analysis professionals.
  • Speaker Introduction: Featuring Mark Belser, a knowledge engineer at Softed, with extensive experience in AI applications.
  • Webinar Highlights: Focus on generative AI's role in business process improvement and practical applications.
  • Q&A Session: Engage with audience questions to deepen understanding and insights.

We're going to jump in, we're going to get started. I'm going to officially welcome everybody. Hello everyone and welcome to today's webinar, 10 Steps to Process Improvement with Generative AI. I am your host Tiffany Yakilino and I work on the marketing team here at IIBA. IIBA is an independent, not-for-profit professional association serving the global business analysis community.

We're a recognized thought leader dedicated to elevating the discipline of business analysis and we provide our community with relevant tools, resources and networking opportunities. So we did kick off with a little bit of housekeeping, but I'll go through it again in case you joined a couple of moments late: if you have any questions during the presentation, please type them into the Q&A panel and we will answer your questions at the end of the session as time permits.

Today's webinar recording will be available for a limited time on IIBA's webinar archives page within a few business days of the broadcast. So with that I am going to move us over to an introduction to our speaker here. So I am here today with Mark Belser, a knowledge engineer at Softed. Mark is an accomplished educator and consultant with extensive experience in model-driven development, software testing, and agile methods. Mark has developed and taught two brand new courses, AI for Business Analysis and AI for Software Testing which are available worldwide through Softed.

His work spans all phases of software development from business analysis to modeling, coding, and testing. He leads webinars and panel discussions on practical applications of AI through the software lifecycle and helps clients accelerate their processes while maintaining order and efficiency.

So we are so glad to have you here today talking about probably one of the hottest topics of last year, one that keeps growing in interest: all things AI. So without further ado, I'm going to pass it over to you, Mark, just to kick off our conversation.

Oops. I will stop sharing and I will allow you to share. All right, well welcome everybody. So I'm glad to have everybody here. Thank you for joining us today. I was going to say this morning, but that's so west coast centric of me, you know, because for some of you it might be afternoon, some of you it might be evening, you know, all over the place.

Okay, so AI is a hot topic, no doubt about that. Let me go ahead and start by sharing, making sure that we get the correct screen shared. You know, would love to have an AI agent that would automatically manage these kinds of things for me, but we're not there yet.

Okay. So every one of you probably works in a business that has processes, all right. Some of those processes work well, some of them need improvement. The whole concept of business process improvement is about finding the ones that need fixing and, you know, really going ahead and making those fixes to those processes.

Now, I've been teaching BPI, you know, business process improvement for a number of years. And we follow in the classes that I do Susan Page's book, Power of Business Process Improvement. In that book she has this 10-step roadmap to process improvement.

Some of these steps are fairly straightforward to carry out. Some of them have actually, over time, proven to be fairly challenging for people to do. And so when all the generative AI hype started about a year and a half ago, I figured, well, what would happen if I tried to apply generative AI to those 10 steps?

What if I tried to use it for process improvement? So today we're going to focus primarily on how we would dialogue with our generative AI agents and use those dialogues to help us verify our understanding of the problem.

Now, one of AI's strengths is its ability to present content in lots of different formats and to do so very quickly. Now, this is usually very time-consuming and perhaps boring, error-prone. That's the kind of thing, you know, that's what happens if we try to do that by hand. AI actually makes some of that boring, time-consuming, error-prone work much, much easier.

And I'm going to show you how, you know, how that works. Now, of course, one of the things that always comes up is when we provide information about our business processes to AI agents, you know, especially if you're using things like ChatGPT, Claude, Perplexity, and, you know, Copilot and so forth, you probably are aware that you might be uploading private and confidential information to the cloud.

And that's a security risk, you know, and it's not trivial. And this is why some companies actually prohibit using generative AI. So an alternative is to use a private GPT, in other words, to use an AI agent that lives on your own computer and won't leak proprietary data.

Now, you know, I have a colleague who does a lot more work in that area and, you know, we hope to be able to get him in front of all of you in an upcoming webinar. But I'll make a quick mention of some of the things that I've done in working with people to get to that kind of private GPT issue.

And so finally we're going to see how this whole idea of AI-augmented analysis really enables a nimbleness and an agility that's going to allow us to work a whole lot more concurrently and achieve a lot more. You know, you notice I'm dropping in that word 'agile'. I've been an agile practitioner for a number of years, many years in fact.

And you know, while this isn't really a presentation about agile development per se or agile methods, I do find that using AI, the fact that AI is able to do things much more quickly for you enables you to be a whole lot more iterative, incremental, agile and so forth.

Okay, so first, a quick refresher of what I mean by generative AI. I've used that term a few times. And if you've never used agents like ChatGPT, when we talk about generative AI, we're really talking about these agents, these chatbots, so to speak, where you engage them in a dialogue and they create new content out of, you know, your prompts and their so-called training data.

What information do they already know? Now these are really incredibly simple to get started with. You just open the agent and chat with it. All right, almost all of these provide a way of getting a free account. Use them, ask them things. But what you'll find right away is that the exchange is less like querying a search engine and much more like working with another member of your team.

When I started working with ChatGPT way back in, I think it was like November of, you know, 2022, when it first came out. I mean, it's been a while now. I found myself actually chatting with it as if I were chatting with a member of my team. You know, I wouldn't just ask it questions; it would say something and I would go, that's not right, or try again, or what about this?

And that kind of natural language processing is what makes it amazing and what can make it really beneficial to you. Now there are many agents out there, so each one of those has its own language models, its own training data. Using different agents really can help you to get a much broader sense of your problem.

Now these are the ones we're most familiar with. These are all centered around what we call large language models. And they allow for these natural language conversations that create the new content out of previously trained data.

Now, you know, what data? When people talk about training data for something like ChatGPT, well, it's pretty much everything that's accessible via the web, right? This is going to lead us to a lot of really, really interesting outcomes. Now, of course, there are many, many more agents. There are many more models, so to speak. Once you start to get into this, you discover that, you know, there's sort of the superficial layer that everybody's kind of familiar with, and there's a whole lot more that, once you get started, you can get into and explore.

But you don't have to be a super duper expert in all of this. You don't have to be a Python programmer. You don't have to have, you know, necessarily a machine with six GPUs to be able to get started. And that's of course where we're going to be today. So how do we do this? Well, we just say something to an agent. You know, we call that a prompt, and then we get back a response.

So here's something I put in my, you know, AI for BA class. You know, I'm doing such a class: a business analyst starting a new project, what kinds of things do I need to do? Okay, boom. That's one of the first things it suggests. It says, well, what about creating a project charter? Put the following information in. Now, that's all based on the idea of what's been trained, what's been seen on the web, what is conventional out there, and so forth.

So the first concept really is that of using AI as a research tool. Many people really do start using their AI agents as research tools. You know, the sort of that. Tell me about this, write something about this and so forth. Now, generative AI, as I said before, and you'll hear me say this again because this is a really, really important theme.

It can do a whole lot more than just be a search engine. But the idea of using generative AI to do research is a good place to start. Okay, so for example, here we see something of how I got started with the talk. I knew the textbook I used in my classes.

So I just asked, in this case, Perplexity. I just asked Perplexity a relevant question. And then I followed up this question with other questions and began sort of what I felt like was a conversation with an expert to help me outline the kinds of things that I would want to discuss and want to present in this particular class.

Now, what's interesting is, you know how I showed you the five different agents there. I took all five of those agents and I asked the very same question of all of them, and one of my favorite AI agents, Claude, actually gave this response. It said, I don't know about Susan Page's book. Oh, that's interesting.

You know, that's what we sometimes call blind spots in the training data. Right. And so you need to be aware of this. You need to be aware that sometimes an agent may not know about a particular reference.

Now, of course, one of the things that can happen in those cases is sometimes what we call hallucination. Hallucination is basically where an agent makes stuff up. And this is kind of one of the gotchas that you have to deal with when working with generative AI. You know, it's often been compared to an overly eager personal assistant.

And, you know, when unable to find the answer, it'll just make something up. You know, sort of like the kid in class who hasn't done the reading but, when called on by the professor, confidently gives an answer. It could be absolutely, completely wrong. But it just sounds so good. Right?

So we need to make sure that we guard against hallucinations of that sort. But I'm going to point something out to you. Not all hallucinations are bad.

We're going to actually take advantage of the fact that if we don't provide a lot of information to an agent, it may come up with stuff, and it might help us with our overall brainstorming activity. But we'll get to that later.

Okay, so if you look at this table back here, you'll see that two of the agents were aware of the book. One was Perplexity, the other was ChatGPT. It came up with this particular list.

All right, great. So at least we know the agents that are aware of this, but that doesn't mean that the others aren't able to help us with some of the work that we're going to be doing in process improvement. Okay, so the first thing we did was we made sure that the agents we're working with kind of understand the context of what we're trying to do.

Now let's actually get into the 10 Steps of Business Process Improvement. You know, the first one, of course, is developing a list of processes. And we're going to continue through that to, you know, get our list, to figure out which ones are good candidates for improvement, and then doing the what's the AS-IS, what's the TO-BE, all of that kind of work.

So you always want to make sure that you verify the agents' responses against your source. So with what was asked before, I checked to make sure that each one of the agents did, in fact, come up with the 10 steps, roughly as Susan Page calls them out.

Now, these first prompts that we were doing were pretty easy because we were just seeking out factual information. It's really easy. I mean, it's either, you know, yes, it knows something about it, no, it doesn't know anything about it, or kind of in the middle where, you know, it knows a little something, but it doesn't have the complete answer.

Okay, so we can take in those cases, we just take a quick glance at what the agent produces and right away we're going to know if that's correct and what we need. Now, if it's not right, we might be able to correct the agent's understanding through other prompting.

So once we sort of see this pattern in place, we're going to be able to see how we can use generative AI across all of the steps. Right. So we need to get started. We need to get started with a particular case study problem.

So I'm going to use something that's very close to home for me. Our own condo's homeowners association. If you've ever been on a condo board homeowners association or even just, you know, some sort of a nonprofit volunteer organization, you know that the organization probably has a lot of different processes.

Now, by the way, it doesn't have to be a nonprofit volunteer organization. It could be your own business. But I'm just going to use something that's near and dear to my heart because I've got a building walkthrough later today. Anyhow, you know, some of the processes work, some of them don't.

You know, and now this is a really important idea. Some processes may exist without anyone ever being aware of the process as a formal entity. Right. In other words, hey, it exists, but, you know, no one calls it out.

So by asking our AI agent, give me a list of processes, I can go through that and actually see. Oh, yeah, we're aware of this. Yeah, we're aware of that. Oh, you know, I don't think we have anything written down for that.

Oh, you know, and of course, so this first step is really about sort of getting to know what we know and what we don't know. And the AI agent is serving as a great brainstorming partner in this.

Now this prompt to get the list of processes is what I call a getting-started prompt. It's a prompt that's used to kick off the chat. Now, again, this is a slide taken from one of our BA classes, you know, our AI for BA class, where I talk about three ways to do that getting-started prompt.

If you have a list of processes or more substantial documents, you know, then you can have your agent analyze that list and those documents. Now, what we just did a moment ago, though, was: what if we don't have such a list readily available? You know, we could try to brainstorm a list ourselves, but sometimes it's better, I like to say, to be an editor with lots of content than a writer staring at a blank page.
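As a rough sketch, the three getting-started variants can be thought of as three prompt templates. The wording below is invented for illustration, not taken from the class materials; you'd adjust it to your own documents and domain.

```python
# Sketch of the three getting-started prompt styles. All wording here
# is hypothetical; adapt it to your own situation.

# 1. You already have a list: ask the agent to analyze it.
have_list = (
    "Here is our current list of HOA processes:\n{process_list}\n"
    "Analyze it: what's missing, and what overlaps?"
)

# 2. You have substantial documents: ask the agent to extract processes.
have_documents = (
    "I'm attaching our HOA's policy documents. "
    "Extract the business processes they describe."
)

# 3. You have nothing written down: a context-free brainstorming prompt.
context_free = (
    "What business processes would you expect a condo homeowners "
    "association to have?"
)

print(context_free)
```

The third variant is the one used for the condo example: no context is supplied, so the agent's response comes entirely from its training data.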

So this is where generative AI is really fantastic because generative AI can give you that content. Now, which of these, the three approaches I take depends on what I start with. In the case of the condo association example, you know, I was just being lazy.

I just want to sort of demonstrate to you also how AI could come up with a lot of content. But I also wanted to sort of see how these agents organized the process. What did they come up with?

And you know, even if I have a list, I might still want to give an agent a context-free prompt, in other words, to ask it, what do you think? What would you come up with? And when I say they, you know, I'm going to be asking more than one agent, you know, I'm always out to try to get that second opinion.

So it's always also important when you get these kinds of factual responses to ask for the source of information. In other words, where did you get this? You know, you came up with this list of processes. Can you find any citations, any references on these?

Now some agents, like Perplexity, will provide references by default, and other agents, of course, like ChatGPT, are going to require that you explicitly ask. Right. Now, doing this does two things. First of all, it helps you to validate that this is not just, you know, hallucination.

It's not just making something up, you know, it also gives you places to research. I mean, I might look at this list of, you know, links, I might look at some of the references produced by Perplexity and I might say, hey, these are things that we ought to read.

You know, if we're trying to do a process improvement activity, I might want to go out and read the CACM California Association of Community Managers reports or some of their guides and manuals. Not all my board members may in fact even be aware of that. This provides a great way of doing that research.

And again, it's quick. That's really one of the big benefits. Again, just like in medicine, get a second opinion. Okay? Now of course, when you have that second opinion or third opinion or fourth opinion, and you have your own list, and you have all of that, you've got a lot of content; you've got multiple responses to the same basic question.

So now it's important to compare and to merge the results. Now you could print them out and sit them side by side, you know, very 1980s style. Or you could say, hey, wait a minute, that's a boring, repetitive job. People hate boring, repetitive work.

Computers love boring and repetitive. Guess what? We give it to the computer. So here what I've done is I've taken a list from one agent and I drop it into the prompt for another agent and I say, give me a combined list. Compare and contrast.
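That hand-off, taking one agent's output and folding it into another agent's prompt, can be sketched in a few lines. The process lists below are invented placeholders, not real agent output:

```python
# Sketch: feeding one agent's process list into another agent's prompt
# to ask for a combined, deduplicated list. Both lists are invented
# examples, not output from any real agent.

list_from_agent_a = [
    "Architectural modification approval",
    "Board meeting scheduling",
    "Annual budget preparation",
]
list_from_agent_b = [
    "Annual budget preparation",
    "Resident complaint handling",
]

def build_merge_prompt(list_a, list_b):
    """Assemble a compare-and-contrast prompt from two agents' lists."""
    bullets_a = "\n".join(f"- {p}" for p in list_a)
    bullets_b = "\n".join(f"- {p}" for p in list_b)
    return (
        "Here are two lists of HOA processes from different sources.\n"
        f"List A:\n{bullets_a}\n\nList B:\n{bullets_b}\n\n"
        "Compare and contrast them, then give me one combined list "
        "with duplicates merged."
    )

prompt = build_merge_prompt(list_from_agent_a, list_from_agent_b)
print(prompt)
```

In practice you paste the assembled prompt into the second agent's chat; the agent does the actual comparing and merging.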

Sometimes I might, you know, take two or three or four. Now again, these are features. Some of these features are things that are only available in some of the agents behind the paywall. But, you know, 20 bucks a month, it's worth it.

My time is definitely worth it. My team's time is worth that. Now, when we look at this list of processes, and I'll come back to that particular previous page, we kind of see a bit of a hierarchy at work here. You know, we see administrative processes, board meetings, scheduling and notification. You might say, oh, we've got big processes and middle-sized processes and little processes.

Now, having been in the world of analysis and modeling for many, many, many years, one of the things that I've realized is that infinite hierarchies like that are mathematically very nice, but they're a pain to manage. So again, I go back to my agent and I say, all right, I've got these three different levels of processes.

Can you come up with some names for these so I can talk about, you know, the big stuff and the middle stuff and the little stuff? And you know, it says, oh, you got domains, functions, processes. Okay, well, it turns out that I don't like that because domain means something else and function means something else.

So, you know, more querying around, more chatting with the agent. And we come up with, you know, systems have categories, categories are grouped and groups have processes. So every process belongs to a group that's part of some category.

Okay, now this is really important because, by the time I get done with this chat, and if you ask for and download the full set of chats from this talk you'll see this, there's an incredible number of processes that our AI agents have generated.

You need to have some mechanisms for organizing all of this content. You can't just keep it in a word document. So having models of this sort will really help you identify the different activities, the different actors, you know, and to get all of that information put together.

Okay, so we've got our list of processes. We got a lot of processes, and it's a pretty large list, thanks to AI. Right. I did a little experiment on another project recently where I had people try to brainstorm.

Actually, we do this in our AI for BA class all the time. I say brainstorm a list of things. And then, of course, now give your AI agent the job of brainstorming. Guess who always wins? The agent. Now, sometimes you come up with things that the agent didn't. That's good.

But, you know, again, you get a lot of stuff. So now we've got that lot of stuff. How are we going to prioritize the work? This is where we get into step two, what Susan Page calls establish the foundation.

Right. Which of these processes are best candidates for improvement? Right. Now, to do this, you know, this is going to require human judgment on two different levels. In fact, you might be asking if AI hasn't really even made this job harder because it's provided you this much larger process list than you would have done by hand.

So in the Susan Page book, she provides a scorecard that is presented as a way of helping you with this prioritization question. She provides a number of criteria for scoring processes. And of course, the idea is that when you get done scoring your processes based on answering these questions, you'll know based on the total score, which are the processes that are most likely the candidates for improvement.
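The arithmetic behind the scorecard is simple to sketch: score each process against the criteria, total the scores, and rank. The criteria names and numbers below are invented for illustration; Susan Page's book defines the actual criteria and scales.

```python
# Sketch of the scorecard arithmetic. Processes, criteria, and scores
# are all invented examples, not from the book.

scores = {
    "Architectural approvals":   {"pain_level": 4, "frequency": 5, "formal_process_exists": 1},
    "Board meeting scheduling":  {"pain_level": 2, "frequency": 3, "formal_process_exists": 3},
    "Budget preparation":        {"pain_level": 3, "frequency": 1, "formal_process_exists": 4},
}

# Total each process's criteria scores, then rank highest first.
totals = {name: sum(criteria.values()) for name, criteria in scores.items()}
ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Highest total = most likely improvement candidate.
for name, total in ranked:
    print(f"{total:2d}  {name}")
```

The hard part, of course, is not the sum but the human judgment that produces each individual score, which is what the dialogue with the agent helps calibrate.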

Okay, so again, before we get into building out a scorecard, I'm going to check whether the agent knows what I'm looking for. Now, notice that in this case, if you look at the prompt I have here, I didn't ask a question. I just made a statement.

I said, you know, in step two, Page describes a scorecard that can be used for process prioritization. Right. Well, good. You came up with the scorecard. Wonderful. Right. But can we use it?

You know, see, the actual scoring of every process is going to require human judgment. The humans have to understand what they're judging. And of course, sometimes these official definitions, these official textbook definitions just don't make sense to people.

Right, you know, if you're a business process improvement expert. Yeah, they might make sense if you're just learning this stuff. Like if you're taking a class in BPI, um, I often see that confused look on people's faces when we start this particular part of the exercise.

Now, AI helps because this is where you can engage in a dialogue with the agent to make sure that the agent understands the criteria, to make sure everybody understands the criteria. And of course, when you do this, you don't want to check your brain at the door.

You want to look at the criteria, read through it, ask yourself, are they relevant to you? Maybe some of the criteria aren't relevant in your case. Maybe there are other criteria that are meaningful to you.

Maybe they are terms that we're using here that aren't easily understandable. You know, the vocabulary might need tweaking. For example, in this case, you know, there's mention of customer and client. Well, in a homeowners association, we talk about owners and residents and board members and committees and staff.

So we're going to dialogue with the agent and have the questions rewritten in a more usable form. Right. This is really beneficial because you don't have to suffer through bad examples that don't speak to you.

You know, you don't have to deal with examples and try to decipher things that aren't relevant to you. You can, you know, really get the agent to help adjust the criteria. You know, redo it, you know, do a little bit, do it a lot. Okay, whatever you need for this.

All right? Now, in practice, I find that this is an iterative process. We make these adjustments, they're based upon the team and, you know, the team's feedback, the stakeholders feedback and so forth.

You know, so you always want to think about these activities, not as a case of go to your corner and write your document, but instead, you know, we're using the AI agents live with our team and we're responding to their questions, their needs, and their feedback.

Ultimately, the concept of the scorecard for prioritizing processes, you know, is really highly dependent upon the situation. I always try to emphasize that the scorecard that Susan Page presents in her book is only a starting point, right? It's not the only way to evaluate processes.

And again, this is a hard thing to grasp when you're first learning the activities. But adding in generative AI makes it so much easier to develop these alternate scorecards to help you look for other criteria or that might be equally or more applicable to your job.

So ultimately, the goal here is to produce a scorecard. That's the kind of thing we can do with AI, and then to help fill in that scorecard. Now, in practice, I find that it's by using this scorecard that we find many of the issues with the columns and so forth that this becomes a kind of iterative process.

And again, this iteration is a theme that you're going to hear over and over again. Now, the usability of any tool, of any activity is going to depend a lot on an agreement on and an understanding of the vocabulary.

For example, the scorecard asks you, does a formal process exist? Well, outside of BPI experts, the idea of a formal process could mean a lot of different things to different people. And of course, when that happens, your team will find itself arguing over the definition rather than answering what to BPI specialists is a well-defined term.

So it's important that the scorecard be usable. Have I said that before? Okay, it needs to be usable. That means iterating over it with your stakeholders to make sure that the questions are relevant to them.

The goal is to make sure that if your stakeholders are going to argue that they're arguing over the quality of the processes and their consistent applications rather than the definitions of the terms.

Remember, our goal for this step was to identify the processes most in need of improvement. The scoring criteria offered up are just a start. Now, of course, what happens if you fill out the table, you run the numbers, and you get a number that contradicts your instinct?

Now, it might be a good insight, but it might again call the criteria and the scoring scales into question. Now, the prompt that you see here: suppose our highest totals don't turn out to be the processes that our gut instinct tells us are the most likely candidates for improvement. Again, this is the kind of question I pose in workshops. It's great to hear what an AI agent has to say about it.

And a prompt like this, you know, might not necessarily be a way of adjusting the content. I'd like to think of this maybe as more guide for the facilitators. Again, sometimes you want to ask your agents not just for content, but for overall process advice as well.

Okay, we have gotten through about half an hour and just gotten through two steps. Don't worry, we are not going to be taking the same amount of time for every one of the steps. We will wrap up at the scheduled time because most of the remaining steps follow patterns that are similar to the first two.

You know, some might involve using the agents to create and others are going to require more human judgment. All right, so let's look at step three. Draw the process.

You know, we're going to do this step for every one of the processes identified as an improvement candidate. Now, actually, I often recommend doing this for every process that's presumed to be defined in some way, whether or not it bubbles up to the top as an improvement candidate.

Now, right off, if you look at the prompt that I have here, you'll notice that I'm not asking for a diagram. No, I've also not provided a list of information to the agent. Now, you could, if you wanted to, provide a description of your current process.

However, let's say you don't have anything defined or written down. Now here I'm relying upon the agent's own training data to make a guess at a process. Now, we talked earlier about hallucinations, when an agent simply makes something up because it doesn't have enough trained or prompted information.

Now, most of the time hallucinations are bad, but in this case, I'm taking the approach that I need some sort of a starting point. So I'm going to ask the agent effectively, hey, what do you think this is?

When dealing with relatively common ideas, you'll be surprised at how well the agents understand your problems. And if they don't, you just engage in a dialogue to get that understanding, to develop that understanding.

Now, the last response that you saw there was a lot of text. All that content was good, but perhaps you want a little bit more of a familiar format. Now, one of the things AI excels at is putting content into different forms.

So suppose your team wants to see things in the form of a use case. They're more familiar with the use case format. Okay, great, just ask for it. Just say, give me a use case.

In the supplement book we offer after this webinar, I include other alternate forms, such as checklists. You know, again, you don't have to suffer with formats that don't speak to you.

You don't have to suffer through, you know, overly methodological representations of the information that you need. Now, as I said, you want to create these kinds of models in one form or another or multiple forms for every process that is an improvement candidate or every process that you think that you have.

Now, think of the time that would be required to do this, though, for every process in your business, or even for some subset of processes. This is nearly impossible to do by hand.

So here we can see how AI can be a true accelerator to the work that we're doing. Of course, if you have a well-defined process, you can provide it in the prompt. You can even take your existing documentation and get it into the format that you want.

Now, process modeling is a good example of a roadmap step that's a balance between AI work and human work. AI does the brainstorming and the grunt work. But the human oversight is essential for checking and guiding what we're doing.

Okay, now the last step, step three, actually said draw the process map. And I gave you a whole bunch of text. Now, for most people, that really means diagram. Now, while the big generative AI agents are still primarily text-based, it is possible to get a precise and usable diagram by asking for a textual model representation and using tools such as PlantUML.

Right? So here, you see, I'm prompting on the left side for the diagram and I'm just continuing the same chat. So the notion of an architectural application process is already known by the agent. Then I take the text from the agent and copy it into a text-to-diagram tool.

So for example, I might use the PlantUML web server that's available online. Then I can get a diagram. Now, I'll admit that this process sometimes takes a bit of trial and error to get the right format.
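As a rough sketch of what that text-to-diagram step produces, here is a short script that turns a list of process steps, like those an agent might return, into PlantUML activity-diagram source. The steps themselves are invented for illustration; the generated text can be pasted into the PlantUML web server to render it.

```python
# Sketch: turning a list of process steps into PlantUML activity-diagram
# source. The steps are invented examples, not real agent output.

steps = [
    "Owner submits modification request",
    "Manager checks application for completeness",
    "Architectural committee reviews request",
    "Board votes on recommendation",
    "Owner is notified of decision",
]

lines = ["@startuml", "start"]
for step in steps:
    lines.append(f":{step};")   # PlantUML activity syntax: :label;
lines.append("stop")
lines.append("@enduml")

plantuml_source = "\n".join(lines)
print(plantuml_source)
```

In practice you'd skip the script and just ask the agent for PlantUML text directly; the point is that a plain-text model format gives you something you can diff, edit, and regenerate quickly.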

And having been in the modeling and diagramming business, having done a book on UML modeling where I tweaked the artwork tremendously and, you know, got into Adobe Illustrator to tweak what came out of the UML modeling tool and make it look pretty, the results of a tool like PlantUML are sometimes, well, not so pretty.

But you know what, I get it just like that. The speed, you know, overrides any of that. And not just the speed of creating the diagram, but the speed of being able to create the diagram, discuss the diagram, review the diagram, update the model, and get all that done.

Oh, we do all that in 10 minutes. Yeah, I'll take the ugly diagram over the pretty diagram any day. If it's helping me communicate and understand what I'm doing, yeah, I'm good with that.

Now, of course, maybe it's not diagrams. Maybe diagrams are too much for your stakeholders. You know, what will really speak to your stakeholders is a format that's familiar to them.

All right? If I'm dealing with a bunch of volunteer board members who are used to the old architectural modification application guide, you know, maybe I want to take the process and have the agent write it in the form of a guide for homeowners, right?

It's important really that we put the content in a usable, readable format. We meet the stakeholders on their terms rather than forcing an artificial format on them.

You know, so here I've done something that is, you know, maybe more acceptable to my end users, maybe more acceptable to my stakeholders. They can review this, right?

And of course, we could take this document and compare it against, you know, what we have currently, or we could edit based on that, put those changes in, and the agent then has a better understanding of the overall process.

Okay, three steps. Got through that last one a little more quickly. So that's our pattern. I'm going to go through some sample prompts and responses for each of the remaining steps so you can see how AI is an accelerator and gives you examples to discuss and consider.

Okay, step four: estimate time and cost. In many cases, the first prompt you have for every step is going to be: how do I do it? Right?

So, how would I do the step "estimate time and cost"? That's a great starting point. Again, this is a way of helping us overcome the getting-started problem.

And of course notice that it's saying, hey, do you have historical data? Hmm, maybe we have it. Maybe just by looking at this response we go, oh, you know, it might be useful to go back and look at historical data.

Maybe nobody ever thought of this. So maybe there's a research activity that comes up to figure out, well, how long does the typical review get through each stage of the process?
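That "go look at the historical data" research activity can be sketched in a few lines of Python. The records and stage names below are made up for illustration; the point is just averaging time-in-stage to see where applications actually spend their time.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical historical records: (application_id, stage, days_in_stage)
history = [
    ("A-101", "intake", 2), ("A-101", "committee review", 12), ("A-101", "decision", 3),
    ("A-102", "intake", 1), ("A-102", "committee review", 20), ("A-102", "decision", 2),
    ("A-103", "intake", 3), ("A-103", "committee review", 16), ("A-103", "decision", 4),
]

def stage_averages(records):
    """Average days spent in each stage across all applications."""
    by_stage = defaultdict(list)
    for _, stage, days in records:
        by_stage[stage].append(days)
    return {stage: mean(days) for stage, days in by_stage.items()}

averages = stage_averages(history)
# The stage with the largest average is the first place to look for improvement.
bottleneck = max(averages, key=averages.get)
```

With the sample data, the committee review stage dominates the cycle time, which is exactly the kind of fact that focuses the improvement discussion.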

Of course you want it to suggest how to review the process. You know, there was an idea in the original proposal, and the agent simply made a refinement to that. Okay, now there was another step called verify the process map.

Now, I find usually that the step of verifying the process map usually occurs as the process map is being created. Again, this comes back to this idea of working concurrently.

If you think about it, there isn't just as-is and to-be. There are really at least three: there's the process as the business thinks it should be, and there's the process as it is actually done.

And of course, both of those could be different from the ultimate to-be process. All right, now: apply improvement techniques. In this step, we're back to brainstorming.

And again, this is something our AI agents can do very well as long as you're willing to accept and to edit some less than practical solutions.

Again, you could get an idea that's proposed. You may have no clue what's meant by it. So, well, what do you do? You ask, you know, and if you get something that's totally out of whack, that makes no sense, push back.

AI in this case is kind of like that eager personal assistant who's always willing to be cooperative, always trying to be oh so helpful.

And here's the great thing: it never takes offense when you say, that's a dumb solution. And yeah, sometimes when the agent gives me ideas, I have actually prompted back, that's really dumb.

Okay, it doesn't report me to anybody. It says, oh, I agree, that's probably not a good solution.

Okay, great. So we can have those conversations with our agents. Create tools, controls, metrics. Again, this is another one of those steps that often proves challenging to people.

Getting started with this is often challenging in our classes. AI's ability to brainstorm is effectively giving us a set of ideas so that we are not staring at a blank page with a few lukewarm ideas.

Now, some might be totally obvious, some might be things you're already doing, some may be impractical. Again, remember, your job now is not to just have to be a creator. You now get to be an editor.

Next is test the process, and of course this doesn't mean testing in the sense of software testing. The idea of testing the process, and where it sits in the roadmap before implementation, can sometimes be confusing, especially to software people who see tests only as software testing. So again, we can see how AI can give us examples of the activities that it thinks take place at this point.

Now, the next-to-final step: implementing the change. Really, getting it explained by AI before starting on the improvement steps can be extremely useful for setting the scope and defining the priority of a particular improvement.

For example, this particular response is making it seem that we may need to update our basic governing documents. And that means a very time-consuming, expensive membership election.

So right away I want to go in and challenge a number of the assumptions and perhaps discover that the AI agent maybe is not working with complete information. You know, why do you think we have to have a membership election to do this?

Or maybe we discover that there are some scope changes that make things easier to implement. Again, it comes down to paying attention to the complete set of responses, engaging the agent in a dialogue, and getting that second opinion.

Oh, by the way, you know, anytime you're dealing with matters involving legal issues, required disclaimer. It is always advised to consult with your legal counsel and hope they're real legal counsel and not an AI chatbot.

Okay, the final step of the process is actually a reminder to look for ways to make continuous improvement part of your regular operations. Again, what are the kinds of things that we should be doing as an organization so that we don't have to deal with big-bang process improvement?

Okay, so I've shown you different kinds of prompts that can be used to accelerate each of the 10 steps on the process improvement roadmap. There are two remaining things I want to mention very quickly.

One of these is the problem that really is often the number one question that comes up when people ask about AI for business analysis. Many organizations are understandably reluctant to share private and confidential information with cloud-based AI agents.

So: configuring a private GPT. Sometimes people call this a personal LLM, a personal large language model. You do this by combining a tool like LM Studio or AnythingLLM with your own documents. When you set up one of these, you can run completely disconnected from the web.

Now, you'll notice right away it's slower, and you might have to pick different language models in order to get the same kinds of behavior that you get from, say, ChatGPT. But it gives you the ability to provide prompts and documents, and nothing escapes into the cloud.
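As a minimal sketch of talking to such a local model, assuming an LM Studio-style server that exposes an OpenAI-compatible chat endpoint on localhost (the port, model name, and system prompt below are all assumptions, not part of the talk):

```python
import json
from urllib import request

# LM Studio and similar local servers typically expose an OpenAI-compatible
# chat endpoint on localhost; this port and model name are assumptions.
LOCAL_ENDPOINT = "http://localhost:1234/v1/chat/completions"

def build_request(prompt, model="local-model", temperature=0.2):
    """Build the JSON payload for a local, OpenAI-style chat completion."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a business process analyst."},
            {"role": "user", "content": prompt},
        ],
        "temperature": temperature,
    }

def ask_local(prompt):
    """Send the prompt to the local server and return the reply text.
    Nothing leaves the machine: the endpoint is localhost only."""
    payload = json.dumps(build_request(prompt)).encode("utf-8")
    req = request.Request(LOCAL_ENDPOINT, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the endpoint is localhost, the prompts and any documents you feed it stay on your own machine, which is the whole point of the private-GPT setup.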

Right. Again, keep your eyes peeled for further sessions that we'll be running, because this is a big, hot topic. This is an area that is growing quite a bit, and the promise of something like this is very important: even if you are concerned about leaking private and confidential information out to the web, setting up something of this sort may be a way of getting started using generative AI in your own business.

And finally, because the time to do the work is radically shortened, we're really able to take a much more iterative and incremental approach to all of our analysis work. And this idea isn't just for process improvement. Really any analysis activity can be made more agile through the use of generative AI.

Once you let AI take on the boring and repetitive work, you're really freed up for that much more creative work. Once you're no longer a writer staring at a blank page, but an editor with lots of content, you get to focus on that content and make real business process improvements.

Okay, I want to thank you and let me open this up to Tiffany for questions that you might have. That is perfect. Thank you so much. I will go ahead and officially open us up for questions for a Q and A.

That was a great presentation, lots of excellent information, so thank you for that. I always have the pleasure of listening back and picking my favorite soundbite, and I think today's was: just like medicine, get a second opinion.

So that's just my little personal one. But there's tons of great questions. I'm going to dive in.

And in the Q and A box, our first question is: which AI agent is currently the best for business analysts? A different one from last week.

You know, I think the best answer to that question, the best non-snarky answer, is: go try them. Okay? Because it depends on the kinds of things that you need to do. Oh, yeah, I can give you another answer: the best one is the one that gives you the best answers with the least amount of work.

But I find that, you know, again, it often depends. I have paid accounts on ChatGPT, on Claude, on Perplexity, you know, all of those.

And I will go off to Copilot, or I'll go off to Google Gemini for certain things, because you'll find each one has its own strengths and weaknesses.

Good answer. Yeah, it's rapidly changing, for sure. I've got a question here from Sarah and the question is, besides process improvement, how does AI help with tariff calculations?

I'm sorry, with what calculations? Tariff calculations. Tariff calculations. Yeah. I don't know, but we could ask. And that's a really, you know, that's, you know, again, not trying to sound snarky. That is exactly how I would go to an agent.

I'd go to ChatGPT and say, you know, how does it help with tariff calculations?

Now, more than likely, and I'm just making a guess on this, it may or may not answer the question the way that you want it to.

So if it doesn't answer it the way that you want it to, if it's hallucinating an answer, so to speak, this is an opportunity to engage in a dialogue. You know, you can maybe even get it to ask you some questions to help clarify what you're looking for. Good answer.

All right, this one was over from the chat. Katie was asking, have you tried Process Manager, formerly known as Promapp, from Nintex? No, but, you know, I'll look in the chat and I'll be certain to look it up. Sounds good. Sounds good.

All right, we'll squeeze in a few more. There's quite a few. Victoria is asking, do you inform your customers when you use AI to generate any BA artifacts?

If so, what was their reaction, or what has their reaction been? Well, let me turn that question around a little bit. I think the answer to the question, should you inform, is: yes, absolutely.

Not to be, not to say, oh, I'm cheating. Okay. But rather to say, you know, we're making use of these agents and we're going to get a lot more information.

Now, of course, I'm always cautious never to go uploading proprietary information, and this often includes even just the wording of a particular prompt. Some organizations are very concerned, coming back to that tariff question, that I might engage in that dialogue a little bit and give away certain facts about a particular organization.

Now, how do I know if the agent is leaking that information? Again, we don't know yet. So I would be fairly cautious about that.

But there are a lot of places where again, you know, even the private LLMs do have a good sense of what's a use case, what's a user story, what's a data model. And so being able to say I've gone out and used generative AI to produce this first cut of a data model, you know.

You know, I think it's worth saying, because it's just like any other tool. Right? Yeah. And the transparency, I'm sure, goes a long way between the parties involved. Okay, we will squeeze in one more question because we have a minute, and this one is from Erica.

And the question is, do you think the free AI agents are still as good as the paid ones? Again, it depends. If you ask me, are they as good for learning and getting started? Oh yeah, they're just fine.

You will hit a point where you're going to want more compute, in other words, you're going to want to be able to do more work, more depth, more features. But you can start with the free things and build up from there. Sounds good, sounds good.

Well, that is all our time for today. This was such a great topic to cover and thank you so much, Mark for leading this outstanding session and such an intriguing topic.

We appreciate your time and your insights and how AI can help support professionals in their daily work and going forward. Thank you to our attendees for being here today.

We will send a follow-up email with access to the slides today plus some other additional information from Softed.

So thank you again for joining us. I see tons of emoji reactions, so it seems everyone feels the same way about a wonderful session.

So thank you again and we'll see you next time. Thanks everybody. Bye-bye. Thank you. Take care. Enjoy.