Evolving the Enterprise
Welcome to 'Evolving the Enterprise,' a podcast that brings together thought leaders from the worlds of data, automation, AI, integration, and more. Join SnapLogic’s Chief Marketing Officer, Dayle Hall, as we delve into captivating stories of enterprise technology successes and failures through lively discussions with industry-leading executives and experts. Together, we'll explore the real-world challenges and opportunities that companies face as they reshape the future of work.
From Data Scientist to AI Strategist: Atalia Horenshtien on Building the Enterprise of the Future
In this episode of Evolving the Enterprise, Atalia Horenshtien, Consulting Client Executive at Hakkoda, shares how organizations can move beyond AI experimentation to real-world, scalable impact. Drawing from her work with companies across industries, she reveals what it takes to drive adoption and deliver measurable business value.
We dive into the mindset shift leaders need—focusing on the right business problems, building trust in AI, and bridging the gap between data science teams and decision-makers. Atalia also tackles the common challenges of AI integration, from technical barriers to cultural resistance.
Whether you’re a business leader, data professional, or simply curious about AI’s potential, this conversation delivers practical steps to integrate AI into your strategy, unlock efficiencies, and prepare for the future of work.
Dayle Hall:
Hi, and welcome to our latest episode of the podcast. This is where we dive deep into the strategies and technologies that are driving business transformation. And I'm your host, Dayle Hall, CMO of SnapLogic.
Today, I have an amazing guest. I actually did a session with Atalia at a Snowflake event. She is incredibly smart. And I always like having smarter people than me because they make me feel smarter on these types of podcasts.
Anyway, Atalia is a data and AI strategist. She has deep expertise in all aspects of enterprise transformation. She's currently a consulting account executive at Hakkoda, which is an IBM company, where she helps global enterprises accelerate their data and AI journeys, modernize legacy systems, adopt intelligent automation strategies, and, obviously, the hot topics of today, scale generative AI and agentic solutions.
We're going to get into her background in a second, but she is a data scientist. She's worked closely with a bunch of executives worldwide to turn things like technical innovation into actual business value. So not only is she a smart technologist, she also understands how to apply that technology into a business setting, which I think is a very unique skill. She's a recognized thought leader. She's been in a bunch of top-tier media outlets and podcasts, and she spoke at global events like Google Next and Snowflake Summit.
Please welcome to our show, Atalia Horenshtien. Thank you.
Atalia Horenshtien:
Thank you. Thank you so much for having me, Dayle. I'm very excited. Having these podcasts is so important. It's one of the ways to just share your thoughts, engage with people, and just increase visibility, increase the awareness around where AI is, reduce some of the boundaries and fears. So I'm very excited about that. Thank you.
Dayle Hall:
Yeah. I think as things like podcasts have become more prevalent, what I like about them is I get to talk to people like you that, if it wasn't just a SnapLogic scenario, like we did in your previous role at Snowflake Summit, which was excellent, but now we just get to talk about the trends that are happening. So they're less focused on specific technologies that we both work on. So these are really fun.
I want the audience to get to know you a little bit. Give me a couple of minutes of background. We met when you were in your last role at Customertimes, but you have a very unique background because you were, and probably still are, a data scientist. And now you've moved on to a more business-facing role. Give the audience a couple of minutes: how did you go from being a data scientist to helping other customers with their AI and tech journeys?
Atalia Horenshtien:
Yeah. I think it even started before being a data scientist, like for many of us in the field. I was actually this cool geek that really developed software when I was younger, after college. I've always liked people. I always liked solving problems. And as a software engineer, you're stuck behind your computer all the time. When I went into the field and tested some of my developments, and I started to explain the concepts to people and really engage with the business, I thought, wow, I actually like this. So I think this is where my journey shifted.
Through my career, I mostly worked for product companies. I had defense, I had cybersecurity, I had AI. And in the last year, I moved to consulting and services, which, for me, was an interesting pivot, because instead of trying to help companies within one specific landscape or one specific product, now I have the ability to look across the organization and help them with multiple processes, multiple platforms. So I really like this approach.
Dayle Hall:
Yeah. No, that's good. I feel like it's a little bit as a CMO when you start to look across multiple aspects of the business. It gives you a really good insight as to what's going on. I think you're an example of that for career-wide, which is you get to really dig into the technology, but you get to see and help customers be more successful. And I hope we're going to bring some of those examples in.
Before we kick into those specific examples, I have a couple of questions just around AI specifically. One, first question is, how are you using AI or agentic today? And what are you doing with it to make you more productive? The second question is, what have you seen that has been a big impact, could be within a previous company, within customers? Have you seen something that you've been like, wow, this is really cool that people have started to use generative AI or agents at this point?
Atalia Horenshtien:
Yeah. I've been in the AI space for quite a few years. At the time, AI was something that no one spoke about; no one knew what I was doing, like, what is data science? So it wasn't new to me. I actually studied AI in grad school and worked in it professionally, but then generative AI came into the picture, right? I think that was the aha moment where suddenly the business really started to think about it. I've had some projects in the past on traditional machine learning, aka predictive analytics, with fun projects around predicting goal likelihood during the World Cup and March Madness brackets. That was the first time I actually used AI for fun projects in a professional setting.
But then most recently, with agentic AI and generative AI, like many other people, I actually leveraged ChatGPT, where I created either a custom GPT or my own agent. I just recently moved to Florida, and I was looking to do some analysis around the neighborhood and trends and compare some houses. Or I tried to plan my honeymoon and do some different comparisons. Instead of spending hours looking through different resources, I just really specified what I was looking for, and within seconds, I got great analysis, comparisons, differences. So that was amazing.
Dayle Hall:
Very cool. Very cool. One of the things I'm debating whether I can talk about publicly to the analysts is that, with one of our ICs, we're using SnapLogic to build an RFI agent. If you've ever worked with any of the analysts, and I'm not going to criticize them on this podcast, there are so many analysts and so many RFIs that we have to fill in. And some of these are intense: hundreds of questions, and they give you two or three weeks to do it.
One of my very smart and talented SEs at SnapLogic is helping me build an RFI agent that will draft some of the initial responses based on our own internal information, public information and that kind of thing, to give us a head start. And I was like, I think it's one of the coolest things ever. So I'm going to try and be the most AI-forward CMO over the next 12 months and see how I can be a little bit more productive. So it's great to hear how people are using it.
Anyway, okay, so let's dig in a little bit. There are a lot of terms, a lot of terminology. You've said you've been working with AI, and it's not an unknown term. But then generative AI came out, and things really started to change with LLMs. And now we have agentic AI. It depends on who you talk to; I've talked to a bunch of customers that, whilst they're doing automated workflows, sometimes define those as agents. Not true agents as we would understand them, but I think this is actually an interesting starting point. How do you define the term agentic AI, and how do you define it in the context of enterprise, integration, and automation? Let's get your definition first.
Atalia Horenshtien:
Yeah. I think starting with the basics: first, let's understand what automation is and how agentic AI is different from that. Automation, think about it like following a script. You set a rule: if this, then that. It's really great for repetitive tasks, but honestly, it's really dumb. It cannot adjust or understand. For example, you are on a train, right? You run on a specific track. Maybe there is a better route, but the train can't adjust and shift to that better route, because it only follows what it's already programmed to do.
Agentic AI is different because it has a goal. It understands language, which is amazing. It can reason through ambiguity, and it can actually make decisions and learn from feedback. So it's not just about doing tasks like automation. It can actually think through them; it can actually get better through them. Imagine you have a new person on your team who figures things out without even asking you every second. This is really a big shift. So no more hard-coded bots. This is where we have digital workers that can really adapt. They can react and even talk to other systems.
Dayle Hall:
So that's good. That's good. I like that definition. One of the things that I think we're also looking at, and I know it's becoming more prevalent, is agents talking to other agents. So not just, in the train track analogy, being smarter about finding the best route and being able to change. What are your thoughts about agents talking to other agents? Because I know that out there in the public domain, there's obviously some concern around, is this going to take people's jobs? I think it will hopefully help people be more effective, more productive. And I certainly don't want to go down the employment route on this podcast; it's definitely a different discussion. But what is your feeling: are we close to where agents are going to talk to other agents? Are we far away from it? What are your thoughts?
Atalia Horenshtien:
It's happening already. Agents are talking to other agents. That's amazing because when you define an agent, it has a specific goal, and you want it to be very specific because you don't want it to make mistakes, right? So when you give different tasks to different agents, and now you can transfer the mission from one to another and build a pipeline from small, mini, micro tasks to one big task, this is really where the change is and this is where you can really apply automation that is backed by AI to multiple and complete processes.
But to your previous comment on people's jobs, I don't see them going away. What I like to use is the 80-20 rule, where you actually give 80% to the machine, the heavy, repetitive tasks, so people can really focus more on creativity, on checking hallucinations, and on their expertise as part of the process. So it's not going away, and it's definitely not taking people's jobs. That's something I 100% believe. And it's not just my thought. I read an article recently where the Snowflake CEO said that AI agents won't replace people by doing the work, but by eliminating the friction that slows them down. So we need to shift how we think about humans' time versus machine time now.
Dayle Hall:
Yeah, I know. That's really interesting. You just gave a great description and analogy of what an agent is supposed to do, agents talking to each other, agents being more productive. Let's take a step back and talk about what makes a good foundation. To be able to leverage generative AI or agentic AI, there's obviously some infrastructure that enterprises need in place to make sure they can take advantage of it. And as someone who understands the technology and has talked to a lot of customers and people trying to do this, I think you're in a very unique position to talk about what's needed. So let's talk about a baseline. What kind of infrastructure, what kind of technology, what does a typical enterprise need to really leverage and take advantage of this new wave?
Atalia Horenshtien:
Yeah, you can't just slap AI onto chaos and expect results. This is some of what we talked about together at Snowflake Summit when we did that customer story. I think the foundation starts with, first, good and accessible data. If your data is stuck on-prem or in spreadsheets from 2009, forget it, it's not going to work. So AI really needs connected, cloud-ready data to work with, and SnapLogic supports that journey as well. So that's really the basics.
And then you need to look at: do you have the right infrastructure? Cloud, GPUs, storage, APIs. Those things are not glamorous, but they're critical if you want your agents to run smoothly and securely. Do you have the right people to do the job? It's not just about building it. It's also about productionizing it and backing everything with governance and security, which is something you have to have today, regardless of whether you are regulated or not, especially when you include agents and generative AI as part of the process.
Dayle Hall:
Yeah, I like that. I like you said people. We just talked about, is it going to make people go? No, it's not going to take people away, but it's going to make them more productive. So you have to have the right people. Governance, very important. Particularly, I've done a bunch of podcasts talking about managing ethical AI and using the data and the governance of it. That's really important.
I like what you said: clean, connected, and cloud-ready data. I like that a lot. It's a good, snappy term. I'm going to remember that. I might actually use it when I promote this podcast.
Atalia Horenshtien:
Go for it.
Dayle Hall:
That's a good one. You should trademark that one, Atalia. No, but I think those aspects are important.
Now, when you've talked to customers when they've started this journey and the work that you've been doing, how ready are enterprises for this? Because you mentioned, okay, good and accessible data, clean, connected, cloud-ready, people and governance. Be honest, when you talk to customers and you talk to people that are on this journey, how ready are they?
Atalia Horenshtien:
That's a very good question. I don't think there is one answer to it, because I've seen readiness divided into two main categories. The first depends on your vertical. Some verticals are more ready than others. Obviously, for the more regulated ones, it's just more difficult to move to a cloud-based solution. You have all that HIPAA, CCPA, and GDPR compliance. On the other hand, those companies, like financial services and healthcare, also have deep talent and big budgets. So it's a conflicting thing for them.
But also, even within the organization itself, even if you are cloud-ready and have the right infrastructure, the second layer is which data you have in the cloud versus which data you have on-prem. Many companies will keep this hybrid mode, where they still have some data on-prem, the most critical data, while some of the data can be stored in the cloud. So it's a process. I'm not sure when we're going to be 100% cloud-ready, but I see this hybrid approach as the most successful one so far.
Dayle Hall:
Yeah. In terms of the verticals that you mentioned, which ones are you seeing that are incredibly- or not incredibly advanced, but some that are a little bit more willing to move forward? I know you mentioned the verticals. Obviously, if you're in healthcare, there's a lot more controls. So who is doing it?
Atalia Horenshtien:
I think financial services is one of the most advanced verticals that I've seen out there. Where things are easier is for retail and CPG.
Dayle Hall:
Okay. Oh, interesting. FSI is interesting because, again, it's actually relatively highly regulated for the most part. But yeah.
Atalia Horenshtien:
It is. FSI has been in the AI and machine learning space for quite a while. They've been running predictive analytics for so many years. They know the AI space very well, and they have really supportive infrastructure and talent to do that as well. And this is without even talking about budget. Yes, they are very mature, not just in their understanding, but also in the infrastructure they have in place.
Dayle Hall:
Yeah. That's interesting. So that lays out a little of what you have to have in place to be ready to really take advantage of it. Let's talk a little about use cases, then, to start using agentic AI and generative AI. Let's say you have the right people and governance. You've got access to the data. You've put it in a place where you can actually manipulate it and use it to get the most out of AI. What are you seeing around initial use cases, generative AI or actual agentic use cases? What are you seeing as the top ones, and which are the really interesting ones that customers are doing?
Atalia Horenshtien:
I think what's interesting here is that many companies, and this is what I advise as well, they start actually with internal use cases. This is where it’s a safe space, no customer impact if it breaks. Every company today has a custom GPT. If you don't have one, you're really behind.
Dayle Hall:
Oh, wow. Okay.
Atalia Horenshtien:
You have companies that have auto-generated internal documentation, things that support marketing, things around data like matching schemas, data cleaning, and pre-processing. Not those wow projects that you might expect, but they're very, very useful, and they support the mission of doing things faster, doing more with less, reducing FTEs, being more innovative. These are the areas where I see agentic AI playing a part for now. And it's not a surprise, because let's not forget that agentic AI is still relatively new.
Dayle Hall:
Oh, yeah.
Atalia Horenshtien:
It took so many years to even have those metrics around predictive analytics. You need to trust it a little bit more. You need to have guardrails in place. Those things take time.
Dayle Hall:
Yeah, that's a good point. We'll talk about measuring impact in a bit. When customers are trying something new around a use case that, as you just said, they have to give time to actually work, and maybe they're talking to their own executive team, what is the expectation around how fast and how much of an impact this can have within a short space of time? And do you think there is an expectation that it's so advanced, it's just going to revolutionize everything quickly? Do, let's say, CEOs or CFOs have patience, or do they just expect immediate results?
Atalia Horenshtien:
Some have-
Dayle Hall:
You don't have to give names, Atalia. You don't have to give a specific name.
Atalia Horenshtien:
That's fine. Some have patience, some don't. I think when you look at how you can really check that there was an impact, it's not just the metrics. It's about whether the work feels different, whether the work feels easier. And you really need to give it some time, right? So you want to know things like: did it speed things up? How did it affect the team? What new things is the team now capable of doing? How did it affect the customer experience, if it's something external, or how did it empower the internal team, like marketing? I've seen so many people afraid of writing one paragraph, and now, with generative AI, they feel so confident to write an email or a blog post. So you need to focus on the positives. We are still navigating the challenge of how to really measure generative AI and agentic AI.
Dayle Hall:
Yeah. Again, like you said, it's all very new. And I think as individuals, if we're using it, we feel a little bit more productive. We probably feel a little bit more confident in some of the things.
Let's talk a little bit now about measuring. You said it yourself, it's a little bit hard to measure some of these things. Some of it is about our personal, how we feel more productive. If you're advising someone, if you're talking to a prospect about potentially starting some of these use cases, what would you advise around a good measurement of success based on it? If you've got a use case as an example, that's fine. If not, how would you advise them about, look, if we can do X, Y, and Z using generative AI or an agentic use case, if we're doing it X percent faster or whatever we're saving time, what is that best way to measure success? Maybe short term and then maybe long term.
Atalia Horenshtien:
Yeah. I think things like time saved: how many hours did people really reclaim? Cycle time reduction: is your decision-making process faster now? If you have some use cases around data, has the error rate or data quality improved over time? Is the output accurate? Adoption is a great success metric. Are people really using it, or was it just a solution that was developed and then left on the shelf? One of my favorites is actually the AI uplift: compare the before and after, generally on the entire process, and see what has changed, how it affected headcount. I think these are some good ones.
Dayle Hall:
Yeah, that's good. Definitely, we have one of the first ones we built internally with SnapLogic itself, is we built an agent for one of our finance leads around reconciling end-of-quarter invoices and contracts. It was something that she would have to spend a long time looking through, either looking through manually, looking through PDF contracts. Sometimes there's little changes that you miss. Her month-end close now has gone from 20 hours to about an hour and a half.
I know it sounds terrifying to say this, but we recovered about 2% extra revenue because the agent was able to be more cautious looking through contracts and little changes. And that's actually a big impact, 2%. It doesn't sound like a lot. When you're $100 million, $150 million company, that's a big chunk. That's a good example of a use case and it didn't take that long to build. It just runs in the background. Those are obviously good metrics to say we've been successful, we're getting more money.
Atalia Horenshtien:
Yeah. And when you mention use cases, obviously, there are some good examples, but there are also some pitfalls and things to avoid while you're going through this exercise. I have a recipe of three don'ts.
Dayle Hall:
Okay, give me the recipe.
Atalia Horenshtien:
Don't start with a shiny tool, or as I like to call it, the solution. Look for the problem first. An example I like to give: you have flowers, and you want to enjoy them more. Don't buy a new vase just because you want to enjoy them. Maybe there's another solution. Maybe it's the lighting, or moving them around. So start with what you're trying to solve, and then you'll figure out the how later.
Another one is don't put IT in charge.
Dayle Hall:
That's an interesting one. That might be controversial, Atalia.
Atalia Horenshtien:
It is a little bit, but let me explain. Many organizations have a centralized IT that serves multiple business units. Obviously, they do the best job around managing infrastructure. But I'm speaking more about the use cases. You want to make sure that the decisions the business takes are aligned with the business goals and the business needs. Will this actually help your team? If you create projects that seem interesting because they're cool or whatever, but they don't align with the business goals, what did you actually achieve?
Dayle Hall:
Yeah, yeah.
Atalia Horenshtien:
And the last one is don't try to be perfect, because really, perfection never ships. I've seen teams spending months and months building the ideal solution that never launches. AI can be messy. Start small, give your team room to try, to learn, to test and improve on the go. So even if you decide on one specific use case, maybe start with a small chunk of data and a small part of the job, and then increase it over time, but just bring something to production.
Dayle Hall:
Those are really good, the three don'ts. I like that. Have you seen customers who have maybe started a journey, started a project for a use case, and, we use the term scope creep, maybe they over-engineer what they're trying to do? Maybe they can't get the right alignment across the business. Maybe they start a project and can't get access to the data they needed. I like what you said about starting small in terms of what it can do and expanding from there. But have you seen examples where people just boiled the ocean and didn't get anywhere close to what they thought the output would be?
Atalia Horenshtien:
Yeah, a hundred percent. I like to run workshops where there's actually a use case card. Before diving into the use cases, I run an exercise with my customers around grading use cases on impact and feasibility metrics. For impact, you look at: is there a real pain? Can you quantify the added dollar amount? We're still a profitable business, right?
And on feasibility, you need to look at things like: where are we with the essential ingredients we spoke about earlier? Do we have a business sponsor? Do we have a budget? Can we test fast? Do we have any blockers? When you grade them, you take the use case and double down on all those factors, trying to find where the blockers are and what the issues are. If you don't have a complete picture of your next steps and how you can be successful with this use case, don't even bother to start.
Dayle Hall:
If you have those conversations, do people- do they accept it? Again, you're talking to so many customers, you would hope that they're bringing you in because of the experience, but they're like, oh, we're going to be fine. Do they grasp what you're trying to tell them, or does it take a little bit of time?
Atalia Horenshtien:
They love it. I've witnessed great conversations between people on the same team who didn't agree with each other. But that's a great, safe space: bringing the business together with IT in the same room and doing this exercise together. You're checking not just the use case, but also the readiness of your organization. So expectations are aligned, and every person on the team can actually contribute their knowledge and experience to the use case. You end up with a really reliable expectation of the probability of this use case being successful.
Dayle Hall:
That's great. To be honest, with most companies that are going to start these kinds of journeys, that's exactly what we're trying to get to. So that's good. I'm sure you're really helping out a lot of these companies.
As we wrap this up, I want to go back to what we talked earlier about the people. You specifically called out one of the four areas to think about and have things in place is the right people. How are you seeing roles change and evolve across the business, whether it's IT or the business functions, as AI becomes more relevant and prevalent in these organizations? Are you seeing a shift in terms of what people are doing? What about things like are companies training for these kinds of things? What are they putting in place?
Atalia Horenshtien:
A quote that I really like to use is: AI won't replace humans, but humans with AI will replace humans without AI. So you're actually going to see roles where people are leveraging AI to do better, to do faster. I've seen analysts turning into prompt engineers. Data engineers are now building agent workflows. Even business leaders are suddenly learning about AI and embeddings. They now need to speak both business and AI fluently.
Dayle Hall:
Yep.
Atalia Horenshtien:
So that's very, very critical. And when you look at onboarding: first of all, you need to be very transparent with your team about what tools you're bringing in and how they will affect people. Look at reskilling, don't replace people. Train your managers well. Introduce AI from an early stage, even in onboarding, and give people the ground to really test things, even be co-creators. Maybe people will come up with some innovative ideas to actually support the organization internally and externally.
Dayle Hall:
I know someone who was recently interviewed at a very large multi-billion-dollar tech company. What was interesting is, in that interview process, they proactively said, we encourage you to use AI as you do research, preparing for the presentation. They want to know what the candidate did during that process. One of the things which I thought was really interesting that I've started to use in my own interviews when I'm recruiting for the team is, the question is, write me a prompt to solve a certain scenario.
So if you're interviewing someone for, as I would be, potentially a product marketing role, you give them a scenario you're trying to solve, something you're trying to create, and you ask them: what kind of prompt would you write? Because it has to be a little more detailed than just saying, please write this in the style of. The more detail, the better; it's a good way of actually showing how people think about it. I think you're right, it's going to become a more important skill, not just getting the output, but how you write the prompt.
Atalia Horenshtien:
Yeah. I can also share a project at Hakkoda that turned out to be super successful. There was actually a support engineer who worked on an internal project. This project became an AI agent that can help with data migration processes, and it has already earned a CIO 100 award. So things can really go in different directions when you have your team embedded in this process.
Dayle Hall:
Yeah. As you go in, and you mentioned some analysts are now turning into prompt engineers and people are going to upskill, as part of your customer prospect engagement type of activities, are you also advising the organizations, the leaders, what to do around potentially- maybe not reorganizing their teams. You mentioned a lot of this should come from business units, not necessarily IT. But are you advising them on what innovative companies are doing around structure within their organization? You mentioned earlier governance. How are you advising those organizations to really set their team up for success?
Atalia Horenshtien:
Yeah. I think you're looking to find all those blind spots where the business doesn't have any support right now. And together, you look into the different roles and processes: what you can automate versus what other tasks this individual or this team can take on. And this is where you fill the gaps on what the business doesn't do today.
And when you look at governance: first of all, there are multiple guardrails in the market around hallucination and safety, on your prompts, on your results, on your AI systems. But also keeping the human in the loop to really eliminate hallucinations, and having a practice we call, in the technical world, observability. So really digging into the data under the hood: what are people asking, what did my agent actually reply, and then digging into, were they supposed to ask that? Were they supposed to have access to that? Are the answers the LLM gave correct? Do you have a thumbs-up/thumbs-down mechanism? So really running a mini analysis project on the under-the-hood data behind this LLM solution.
Dayle Hall:
Yeah. I know at the start, as we started to really dig into generative AI and LLMs, there were a lot of hallucinations. There are obviously some brilliant stories, like the lawyer who used an LLM to write a response, and the LLM just completely fabricated everything. I don't hear as much about hallucinations these days. Are you still seeing some of that? Is it that the AI is getting so much better that it's less of an issue? Clearly, you still have to have humans check it, but is it still a big issue when you talk to customers?
Atalia Horenshtien:
It is a big issue, especially on certain topics. LLMs- and it really depends whether it's a third party or open source- are trained on a specific set of data. So if you don't introduce the specific data in your domain, the model won't have those answers and it will hallucinate. You can still look at companies that are building with open source, like a fine-tuned LLM based on their own data, to really have fewer hallucinations. It's still out there. Obviously, there are solutions and guardrails to put in place, but it's still a big topic.
Dayle Hall:
Yeah. Okay. We're almost at the end. I have a closing question that I like to ask people on the podcast, because obviously we're in a fast-changing world. I joke with my kids that when I was in college- my daughter is going to be a senior this year, so we're talking about all the college stuff- yes, we had machines, but the internet was just starting to become prevalent. We were running cables between our machines so we could play the same game together. Things have changed drastically. What she's seeing now, the AI technology, and what she's going to be able to leverage is light years ahead of when I was in college. I'm pretty old, so I'm not saying that's a good thing or a bad thing.
So one of the final questions I like to ask is this: of all the changes you're seeing, of what you think the technology is capable of, what are you really excited about in terms of the potential of AI and where you think it could take us? You mentioned earlier doing a search for honeymoons or for real estate. Is there something, beyond what you're excited about seeing generally, around how you think it might impact your life personally in the future? What really excites you about what we're seeing around AI, generative AI, and agents?
Atalia Horenshtien:
That's a very good one. I think, as a whole- before we dive into this, obviously AGI is coming, and I'm really excited to see what's going to happen in that space. But generally speaking, I think it's the ability to completely change the way we solve problems and really free up our space to do more innovative stuff. So many of us are buried in day-to-day repetitive work. We're getting very burned out. We don't feel we're maximizing our talent. So having the ability to focus on more creativity and innovation versus the manual, repetitive stuff, and being able to do things I don't have the skill sets for just because I now have this agent that can support me- I think that's pretty remarkable.
Dayle Hall:
I think it's amazing what we're seeing. It feels like we're still in its infancy, scratching the surface, whatever you want to say.
Look, I really appreciate your time. What I always like to take from these conversations is the little snippets of things to remember, because I always think about the people listening to this podcast- what I want them to have is something to take away. They listen while driving to work, and then they get into the office, if people are still going into the office, and they think, I remember hearing that.
So the things that stick with me from what you said today: the baseline of having a good organization with good, accessible data that's clean, connected, and cloud-ready- I'll remember that. Having the right people, when we talked about structure and training and so on. And then having the right governance for the data and for AI, to make sure you can actually get the most out of it.
And I'm going to remember your three don'ts. So the three don'ts: don't start with a solution- look for the problem. I think that was your first one. Don't put IT in charge- that I really liked. I'm sure there's going to be some controversy there if someone's listening to this and you're in IT. But I liked it.
Atalia Horenshtien:
It's a business idea.
Dayle Hall:
As a business idea. Yeah, but I think that's key. And I do think there are a lot of creative people that aren't necessarily within IT. IT can obviously be part of the solution. And I like the don't try to be perfect. Small wins. Set something up and then expand from there. I appreciate going through those. Thanks so much for joining us.
Atalia Horenshtien:
Thank you so much for having me.
Dayle Hall:
Yeah. Okay. That is the end of this episode. Hope you've enjoyed that. Hope you've got some little nuggets to take away, and you can take that in your daily work. As always, I'm Dayle Hall, CMO of SnapLogic. And we'll see you on the next episode.