Welcome to Episode #233 of CxOTalk.
I'm Michael Krigsman, and we are streaming live from SAP's big user conference called
Sapphire Now.
And before we go into this discussion, I just want to say "Thank you" to Livestream
because Livestream is our streaming platform; and man, those guys are really good.
And if you ever need a streaming platform, go to Livestream.
They're really good.
So, thank you, Livestream.
So, we're here at Sapphire Now, and I have the privilege of speaking with Mike Flannagan,
who is deeply involved with analytics, with data, and with something new that SAP announced
called Leonardo.
Mike, how are you, and thanks for being here!
[…] Thanks so much for having me!
So, Mike, you're deeply involved with analytics and with data, and with Leonardo, so tell
us about your role and what you do at SAP.
So, officially I'm the Senior Vice President of products for analytics.
And now that we have launched SAP Leonardo, I have also taken on the role of Head of Products
for SAP Leonardo.
And, we had a big announcement this morning by our CEO Bill McDermott.
So, I really want to dive into the issues around data and analytics, but very briefly,
tell us what is SAP Leonardo?
So, it is officially a digital innovation system, but the idea behind SAP Leonardo is
fairly simple.
Everybody struggles with business problems, particularly now with the pace of change and
the need for digital business transformation.
If you're in retail, problems that you have are not that dissimilar from problems that
your peer companies have.
And the solutions to those problems from a methodology standpoint and a technology standpoint
also have a lot of commonalities.
So why does every company have to feel like they're reinventing the wheel?
And Leonardo is intended to help accelerate digital transformation for companies by leveraging
SAP's experience with other companies to help them solve the same problems using
the same methodologies and approaches.
Obviously, there's some customization that's involved for each company, but you start with
a nucleus that is able to accelerate solving the business problem.
So there's this combination of technology and business process that kind of move together?
Well obviously, nobody in the C-Suite wakes up in the morning and says, "I want
to go buy some digital transformation."
Exactly.
They're thinking about, "How do I improve revenue growth; how do I improve bottom-line
profitability; how do I improve customer experience," so, when you look at those things, you can
break them down into a set of fairly digestible business problems that need to be attacked.
So, if you can very quickly move from the first problem to the first solution, and then
you attack the second problem and second solution, you can move your company along a maturity
curve until you become a fully digital business, post-transformation … all the way
across.
But if you start saying, "I need to go change everything tomorrow," that seems like almost
an impossible task and a bottomless pit of money.
So, it's important that customers be able to take a little step and see the results
and get the return on that investment, so that they feel confident taking the next
step and continuing […]. Mike, I think we should begin with a discussion
of data.
And we heard in one of the keynotes this morning the phrase "Data is gold," and we hear
similar kinds of sentiments all around the industry.
And so, with digital transformation, let's begin with this notion of, what is the relationship
between the data and the ultimate digital transformation that takes place?
There are all kinds of great analogies in the market; that data's the new gold; data's
the new oil; whatever you prefer, data's a very valuable asset.
And, if you think about your data the way you think about your human capital, or the way
you think about your real estate investments, you start managing it as an asset
that has a lot of business value.
Then, you start realizing the transformative power of doing things with that data that
you couldn't do before.
And then, of course, everybody's talking about IoT or the industrial internet, and
that really is about opening up a whole world of data that you didn't have before with
sensors and wearables, and those sorts of things; and the transformative power of that
data becomes exponentially greater because so much more data from which you can draw
insight becomes available.
So, talking about collecting data from many new sources that even a few years ago were
really hard to imagine; can you give us examples of some of the new data sources that are available
to us?
Sure, absolutely!
I mean, and of course, I think it's worth noting that it's not just new data sources.
If you've been running your business for a hundred years…
Good point.
… you have a lot of really valuable enterprise data.
I think the power of things like industrial IoT is adding to that some data from new sources,
and so you think about data from sensors.
We've got examples of train companies who outfit the brake systems of their trains with
sensors so that they can measure brake wear.
In fact, my car has sensors on the brakes.
It doesn't send me an email, but it gives me a little display on my dashboard.
Everybody can sort of relate to that little example.
Now imagine you're managing, like Trenitalia does, some thirty thousand locomotives, and you're
trying to minimize the amount of time that you're out of service for maintenance, both
to decrease your downtime costs and to improve your customer experience by having trains
running on the tracks.
The ability for them to just add some sensors to monitor a little bit of data about maintenance
really gave them the ability to transform the business process around predictive maintenance.
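[To make the predictive-maintenance idea concrete, here is a minimal Python sketch; the telemetry numbers and the replacement threshold are invented for illustration, not Trenitalia's actual data:]

    import numpy as np

    # Invented telemetry: cumulative brake-pad wear (mm) by kilometers travelled
    km   = np.array([0, 20_000, 40_000, 60_000, 80_000])
    wear = np.array([0.0, 1.1, 2.3, 3.2, 4.4])

    # Fit the wear rate and extrapolate to an assumed service threshold
    rate, offset = np.polyfit(km, wear, 1)   # mm of wear per km
    REPLACE_AT_MM = 8.0                      # hypothetical replacement threshold
    km_remaining = (REPLACE_AT_MM - wear[-1]) / rate
    print(f"Schedule brake service in roughly {km_remaining:,.0f} km")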
Sensors are really one example.
Wearables are a new data source.
And you know, I think if you consider those types of data sources, you could imagine what
the future might hold: all kinds of different wearables, embedded sensors…
Video is becoming a really powerful new data source as deep learning becomes a more
mature technology.
So, it's an incredibly interesting time for data people.
And these data sources have the power to shape and mold processes.
I mean, just, for example, last week on this show, I spoke with the Chief Marketing Officer,
the CMO, of Aetna, a huge insurance company.
And, he was talking about how they can take wearable data, just as you were describing,
and feed that back to patients in order to increase patient wellness.
So, can you elaborate, then, on the linkage between having these data sources, and changing
processes and even changing business models?
Well, you know, it's interesting.
In the enterprise world, we talk a lot about the business outcomes.
In the Aetna example, what you're talking about is patient outcomes; human outcomes.
Exactly.
If I can improve as a doctor; if I can improve the outcome of interacting with a patient
to extend their life or extend the quality of their life, I mean, that's really exciting.
You know, it's interesting to have business outcomes like more profit and more revenue.
But you know, when we start injecting some of the human discussion about the power and
the potential of this data, we start realizing we can really change the world.
We can change society, we can change the quality of people's lives, and all of that is starting
to be made possible by these new sources of data.
They give you new insight into people.
And, before we go on to the next phase in, shall we say, the life cycle: so, we've collected
this data; how do we then start to use it?
Can you give us an example of existing corporate data that we can find new uses for today?
Sure.
So I think if you look at loyalty card data in retail, there's a lot of information
there about purchasing history, purchasing preferences, which stores you tend to frequent,
those sorts of things.
There's a lot of rich data there.
Historically, it's been used for things like sending you coupons.
But, there's a lot more that can be done with that, particularly if it's augmented
with some new data from new sources.
And, so I think there's a lot of value in the dataset that already exists there, and
as you start thinking about how to augment that with new data, the power of both really
becomes much greater than the sum of the parts.
Okay.
So, we've now got our existing corporate data, that we can make use of in new and better
ways because we can now aggregate it; and we have things we can do with it that we couldn't
do before.
And so, what can we do now with that data that, historically, we could not do?
Because it seems like that's the thing that unlocks the power of that existing data.
Yeah.
I think there are advances in analytical techniques, things like machine learning; you know, lots
of industry buzzwords; excitement around machine learning, these days.
The power of machine learning is that it really gives you the ability to go back into data
that may be two, three, ten, twenty years old, and take all of that history that you
have about customers and store operations, and a variety of different things, and turn
that into training data, right?
To teach the machines what a customer looks like.
What does a good customer look like?
What does a bad customer look like?
What does fraud look like?
Those sorts of things require processing a quantity of data that a human would be
incapable of dealing with, right?
So it has to be about using the power of machines.
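[A minimal sketch of what "teaching the machine" from labeled history looks like in practice, with invented features and synthetic data rather than any real fraud system:]

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Each historical row: [amount, transactions in last 24h, account age in days]
    X = np.array([
        [25.0,  1,  900],
        [40.0,  2, 1200],
        [980.0, 9,    3],   # big amount, brand-new account, bursty activity
        [15.0,  1, 2000],
        [870.0, 7,    5],
    ])
    y = np.array([0, 0, 1, 0, 1])   # labels: 1 = confirmed fraud in the past

    # The model learns what fraud "looks like" from the labeled history
    model = LogisticRegression().fit(X, y)

    # Score a new transaction: how closely does it resemble past fraud?
    print(model.predict_proba([[640.0, 6, 10]])[0, 1])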
And then, obviously, there are examples in manufacturing where you take that learning
and turn it into artificial intelligence; things like robots.
But, there are also examples in customer service, for example, with chatbots, where now if I want
to ask a few questions of my bank, instead of having to have a teller answer the questions,
I can just go online and chat and get automated responses that are amazingly accurate for
the questions I'm asking.
Okay.
So, there are all these things that we can do with the data, but how do we prepare that
data?
How do we prepare … So, we're collecting that data, we're doing something with it,
and then it can be used in the applications you were describing?
I think there are a couple of different ways to answer that question, but one that I think
is particularly of interest for a lot of our customers is, when you talk about leveraging
a huge population of data from which to learn, there are concerns about privacy, and there
are concerns about data protection.
And so, one of the things that is, I think, important in every conversation about large
datasets is how do you anonymize that data?
How do you protect the personal information that is contained in that data, how do you
make sure that your policies are such that you only use that data for its intended purpose?
That having been said, part of preparing the data is sometimes normalizing the data so
that things look common across a large dataset.
Also, anonymizing that data.
And so, when you take an aggregate, you can use that data for, let's say, benchmarking.
Take the average price that I should expect to pay for a bar of soap: I can collect
data from hundreds of different sources.
Some of them may express it in dollars, some in Euros, some in other currencies….
I have to normalize that data so it's all a common currency, and then I aggregate that
data.
It doesn't really matter whether the data came from Retailer A, or Retailer B, or Retailer
C, so I can anonymize that part of the dataset.
And, what I'm really looking for is, "What is the average price in each city, each country,
per bar of soap?"
And then the value of that is a good benchmark for retailers in that market to use, but
you're not using any data that's specific to a retailer in a way that's identifiable for
the […].
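[Here is a minimal sketch of that normalize-then-anonymize-then-aggregate flow in Python; the conversion rates, retailers, and prices are all invented for illustration:]

    import pandas as pd

    # Assumed conversion rates, purely for the example
    USD_PER = {"USD": 1.00, "EUR": 1.08}

    raw = pd.DataFrame([
        {"retailer": "Retailer A", "city": "Boston", "price": 1.49, "currency": "USD"},
        {"retailer": "Retailer B", "city": "Boston", "price": 1.39, "currency": "USD"},
        {"retailer": "Retailer C", "city": "Berlin", "price": 1.20, "currency": "EUR"},
        {"retailer": "Retailer D", "city": "Berlin", "price": 1.35, "currency": "EUR"},
    ])

    # Normalize: express every price in one common currency
    raw["price_usd"] = raw["price"] * raw["currency"].map(USD_PER)

    # Anonymize and aggregate: drop the retailer identity and publish only
    # the per-city average, so no single source is recoverable
    benchmark = raw.groupby("city")["price_usd"].mean().round(2)
    print(benchmark)

And where does that normalizing and anonymizing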
take place?
Does it take place inside the customer walls?
Does it take place on the platform side, like on the SAP side?
How does that … What's the mechanism for that?
And then for the benchmarking as well that you were just describing?
It really depends.
So, one example of aggregated, anonymized data that is being used for benchmarking is
in SAP Fieldglass.
It's an application that we make available to customers to deal with the contingent workforce.
And, if you look at Fieldglass, we see hundreds of thousands of transactions every year for
people who are looking for jobs, and people who are hiring temporary workers.
Inside of that application, we can now aggregate and anonymize the data so if I say, "What
should I expect to pay for a salesperson in these three cities?", we have that data.
And we can make that available to customers as live insights in real time.
When they're thinking about what is the right labor rate to offer for this role, they
can see what is a common labor rate that will get them a well-qualified, talented individual
to fill the role in a reasonable timeframe.
So, that kind of data would be aggregated and anonymized and injected back into the
application by SAP at our level.
But, we have an example here, actually, at the Sapphire conference.
Our SAP data network folks are talking about something they're doing with a very large
elevator company, and that data is that customer's data.
So in their case, aggregated, anonymized, and used on their premises in their systems.
And I have to assume that this benchmarking capability, either real-time, in order to
look up … So I want to hire somebody and what are the labor rates, for example, for
this type of position?
Or, historical, "I'm thinking of doing something and I want to know how did we compare
in the last six months to our competitors?"
So, I have to assume this is extremely valuable and this is what customers want.
It seems to be for sure something that we're getting more and more requests to make available.
What I think is interesting is not so much … I mean, [it is] certainly interesting;
the raw benchmarking … What I think is more interesting, and what we hear more customers
saying, "If you could do more like this, it would be great," is … So I know that
I have a certain budget, and I know I have a certain set of needs, and that set of needs
materializes for me five skills that I need from an individual.
But when I go look at the benchmarking data, the five skills that I need, in the market
where I need them, cost twice the budget that I have available.
Well, that's not very useful.
All you've done is tell me that I can't hire what I need, and so now what?
I can't afford the thing I want to buy.
That's right!
And so, the more useful thing in that scenario, I think, is to be able to say, "What if
I could compromise and only get three of the skills that I really need?"
Maybe I can teach those other two once the person's on board.
And if that fits my budget, then that becomes sort of a win-win, right?
I get somebody who doesn't do quite everything I need the day they join, but I get them for
the labor rate that I can afford, and I get the opportunity to teach them the things that
they need to come up to speed … That kind of benchmarking also gives you the ability to say, "Well, what's
my next best option?"
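[A small sketch of that "next best option" calculation; the base rate, per-skill premiums, and budget are hypothetical numbers, not Fieldglass data:]

    from itertools import combinations

    # Hypothetical benchmark: hourly premium each skill adds to a base rate
    BASE_RATE = 40.0
    SKILL_PREMIUM = {"sql": 8, "python": 12, "ml": 20, "cloud": 10, "viz": 6}
    BUDGET = 75.0   # the most I can afford per hour

    # Search from "all five skills" downward for the largest affordable subset
    best = None
    for k in range(len(SKILL_PREMIUM), 0, -1):
        affordable = [
            (combo, BASE_RATE + sum(SKILL_PREMIUM[s] for s in combo))
            for combo in combinations(SKILL_PREMIUM, k)
            if BASE_RATE + sum(SKILL_PREMIUM[s] for s in combo) <= BUDGET
        ]
        if affordable:
            best = min(affordable, key=lambda t: t[1])   # cheapest at this size
            break

    print(best)   # the "three of five skills" compromise, with its rate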
And where would this type of calculator be built?
Is this built into the HR application?
Are they doing this now, dare I say, in a spreadsheet?
[Laughter] So, in a spreadsheet is typically how this kind of stuff is managed.
We go out, we take the big salary survey, and we pre-populate a central repository,
generally a spreadsheet, of benchmark labor rates.
That is what we are helping customers move away from.
If you want to run a live business, that's not very real-time.
And so, the Fieldglass application … And prone to errors … I didn't mean to
interrupt, but there are a lot of problems with spreadsheets.
But anyway, I didn't mean to interrupt.
I'm sorry.
Oh no!
Absolutely right.
But, I think the key here is that we're injecting that information back into the Fieldglass
application so that it's right there in the workflow when a customer is trying to
populate a new template for a new job, typically a job posting.
Being able to do that means it's not […], it doesn't sit off to the side of your core
business application.
It is part of your core business application.
And therefore, it's a core part of your … So this type of analysis, then, becomes
a core part of that business process as well.
And that is the key to moving analytics from what it has been up to this point, which is
something that is useful for ten or fifteen percent of your total employee population
to something that is used by one hundred percent.
I have to put that sort of intelligence into the business process.
It can't be a separate thing.
The most common example of this for the consumer is probably Netflix with their recommendation
engine, or Amazon recommending related products that go with this product; those are recommendation
engines powered by machine learning.
And they're extremely powerful not just because of the intelligence behind them, though
certainly that matters, but because they're in your process.
While you're looking for a movie, you're seeing the recommendations about movies you
might like based on your previous choices.
While you're buying toilet paper, you see bar soap that most people might buy at the
same time.
That's useful because it's in your process.
And so, it's very easy for you to use it, and very easy for you to see the value.
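[A minimal sketch of the co-occurrence idea behind "people who buy X also buy Y"; the baskets are invented, and this is far simpler than what Netflix or Amazon actually run:]

    from collections import Counter
    from itertools import permutations

    # Invented purchase baskets, standing in for real order history
    baskets = [
        {"toilet paper", "bar soap", "shampoo"},
        {"toilet paper", "bar soap"},
        {"toilet paper", "toothpaste"},
        {"bar soap", "shampoo"},
    ]

    # Count how often each pair of items appears in the same basket
    co_counts = Counter()
    for basket in baskets:
        for a, b in permutations(basket, 2):
            co_counts[(a, b)] += 1

    def recommend(item, n=2):
        """Items most often bought alongside `item`."""
        scored = [(b, c) for (a, b), c in co_counts.items() if a == item]
        return sorted(scored, key=lambda t: -t[1])[:n]

    print(recommend("toilet paper"))   # bar soap ranks first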
The key, then, is that by building these, let's say, features that are backed by this data,
from a user interface standpoint, the features probably look pretty simple on the surface.
You know, a checkbox here and there to make a few selections.
But the key there is that by building it into the application, it means it's now central
to the activities you're performing, also known as the process.
That's exactly right.
Nobody wants to have to teach all of their employees how to be data scientists.
You don't want them interacting with complex statistical algorithms.
And what you want is for them to do the work that they're experts at doing.
You want your HR people focused on HR.
You want your finance people focused on finance.
But if you can make all of those applications that they're using in their day-to-day lives
more intelligent, more capable of helping them run the business correctly, then it's
great and it's embedded into the work that they're already doing so there's no learning
curve for them.
Okay.
So now, we're going through this story, and I want to remind everybody that you're
watching Episode #233 of CxOTalk.
And, we're speaking with Mike Flannagan, who is responsible for analytics inside the
important Leonardo product at SAP.
Did I say that right?
All right.
[Laughter] So now, we've got the data.
It's been anonymized, it's been aggregated, so now we can benchmark against it.
We've got the user interface to that data, in a nice friendly way inside the software
application; so it's a core part of the process.
And that application is being fed from that data store.
But, you mentioned this kind of magic term, "machine learning."
So, how does machine learning and other, let's say, I hate the term "artificial intelligence."
It's become like "digital transformation," it's sort of a catch-all phrase.
So, how does machine intelligence, machine learning change the way you now can use that
data?
Where's the magic?
Machine learning is a buzzword, and everybody's talking about machine learning.
Machine learning is a fairly horizontal capability.
What makes it interesting is data that will train a model that can then be used to make
better decisions.
Just to elaborate: When you say "train a model," for businesspeople out there, what
does that mean?
So, think of it like hiring a new college graduate and needing to teach them how to
do a job function in your business.
The more they do that job function, the more they learn how to do it.
The more they do it right, and somebody says "good job," that positive reinforcement
makes them keep doing it that way.
The more they do something wrong, and somebody corrects them and shows them the right way,
the less likely they are to make that mistake in the future.
Fundamentally, those are the same concepts as machine learning.
If I want to take the very large dataset of historical data to predict what might happen
in the future, what I'm looking for is what are the things that have happened in the past
and what are the things that happen in the future, and where do I see very high levels
of correlation between the past and the future?
So, for example, sales pipeline data may be highly correlated to next quarter's revenue.
The higher my pipeline is, the higher my close rate is on that pipeline, the more revenue
I'll have next quarter.
The machine starts learning exactly how correlated that is; exactly how good a predictor
of next quarter's revenue this quarter's pipeline is.
And, the more it learns, the more data you feed it, the smarter it gets.
The more accurately it can predict things; obviously, it's not a crystal ball, there's
always an opportunity for unforeseen things to change the future.
But machine learning; one of the powerful capabilities is about learning from the past
and being able to automatically apply that learning to estimate the most likely thing
to happen in the future.
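[A minimal sketch of that learning step: measure how correlated the historical pipeline is with next-quarter revenue, then fit a simple model to forecast from it. The figures are invented:]

    import numpy as np

    # Invented quarterly history: pipeline at quarter start vs. revenue closed
    pipeline = np.array([10.0, 12.0, 9.0, 15.0, 14.0, 11.0])   # $M
    revenue  = np.array([ 3.1,  3.8, 2.7,  4.9,  4.4,  3.4])   # $M, next quarter

    # How strong a predictor is the past signal of the future outcome?
    r = np.corrcoef(pipeline, revenue)[0, 1]
    print(f"pipeline-to-revenue correlation: {r:.2f}")

    # Fit a simple model and forecast next quarter from today's pipeline
    slope, intercept = np.polyfit(pipeline, revenue, 1)
    print(f"forecast for a $13M pipeline: ${slope * 13 + intercept:.1f}M")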
So, we'll definitely dive back into that, but I also want to welcome Dion Hinchcliffe.
Dion is an industry analyst like myself, and truly one of the most influential analysts
among CIOs; and also hosts his own CIO-focused show on CxOTalk.
CxOTalk, absolutely.
Well, thank you.
Thanks, Michael.
Hi, everyone.
Hi, Michael.
So we've been talking about … You were just describing machine learning, and maybe since
Dion is here, where do IT and the business … Where do IT and the business fit together
in this whole landscape? […] one clarification.
So, I'm working with CIOs and the C-Suite in general.
There's a lot of excitement around what machine learning and artificial intelligence
can do.
The question that I'm getting more and more now is, "Now I'm going to hand over
my data to these learning algorithms.
What stops you from learning so much about my business that that knowledge gets
transferred, inevitably, to the products of my competitors and other businesses?
So, how do I know that all the stuff it learns stays with me?", right?
So data is the new gold, the new oil, as you guys were talking about.
How do organizations retain the control?
Well, I think the Number One thing in my mind, as you start asking that question, is that it
starts with something that we talked about earlier: recognizing the value of the data
that you own; recognizing that your data is an asset to be protected.
You don't take the buildings that your company owns and leave all the doors unlocked when
everybody goes home.
Same idea with your data.
You have to realize what data is valuable, what data is important, what data is proprietary.
And, take the appropriate steps to protect that data.
And, sometimes, that means that you need to think very carefully about the aggregation,
anonymization process, to make sure that it can't be reverse-engineered; to make sure
that somebody can't de-anonymize data, for instance.
It's surprisingly easy to do since everything gives off data now, right?
So, there's a lot to correlate with.
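[One common safeguard here is a k-anonymity check, sketched minimally below; the column names, threshold, and records are assumptions for the example:]

    import pandas as pd

    # k-anonymity check: every combination of quasi-identifiers should describe
    # at least k people, or a row can be singled out by joining the
    # "anonymized" release against other data
    QUASI_IDENTIFIERS = ["zip_code", "age_band", "job_title"]   # assumed columns

    def risky_groups(df, k=5):
        sizes = df.groupby(QUASI_IDENTIFIERS).size()
        return sizes[sizes < k]   # groups small enough to re-identify

    released = pd.DataFrame([
        {"zip_code": "02110", "age_band": "30-39", "job_title": "analyst"},
        {"zip_code": "02110", "age_band": "30-39", "job_title": "analyst"},
        {"zip_code": "94105", "age_band": "50-59", "job_title": "executive"},
    ])
    print(risky_groups(released, k=2))   # the lone executive row is flagged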
Is all this data insecure, or is it underappreciated in terms of its real value?
I think it's underappreciated.
I think most of the companies that I've talked to recognize that there is value in their
data, but if you ask them to put it on a balance sheet, to put it on a bottom line, they couldn't
tell you exactly how to value their data.
And, that's a problem; because I can tell you exactly how to value my real estate assets.
I can tell you exactly the value of every employee in my enterprise, but I can't tell
you how much what is now being called one of the most valuable assets of every enterprise
is actually […]. So Mike, given this, what are some of the
metrics that an organization that is undertaking a program of digital transformation, at least
when it comes to data, what are some of the metrics or the KPIs that they can use to evaluate
their progress?
How are we doing?
This conversation sounds very of-the-moment and new, but a lot of the answer
to that question, I think, has been the same answer for thirty or forty years, which is that
a lot of companies have a Garbage-in, Garbage-out problem.
If your data's wrong, it's not valuable at all.
And if you use incorrect data as training data for machine learning algorithms that are
about to predict the future?
Your predictions are all going to be wrong.
So, a big KPI is data quality.
How good is your data?
How accurately is it entered into your systems?
How well do you take data that's incorrect out of the system as part of an ongoing process?
So, I think that's a key starting point.
Because, if your data's not right, all these advanced technologies, all of these new techniques
for learning from data, will not benefit you in a way […].
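[A minimal sketch of what measuring that data-quality KPI can look like; the records and the specific checks are invented for illustration:]

    import pandas as pd

    # Invented order records with the classic garbage-in problems
    records = pd.DataFrame([
        {"customer_id": "C1", "order_total": 120.0, "country": "US"},
        {"customer_id": "C1", "order_total": 120.0, "country": "US"},   # duplicate
        {"customer_id": None, "order_total":  89.0, "country": "DE"},   # missing key
        {"customer_id": "C3", "order_total": -50.0, "country": "XX"},   # impossible values
    ])

    # A few data-quality KPIs worth tracking before any model sees the data
    issues = {
        "missing_customer_id": int(records["customer_id"].isna().sum()),
        "duplicate_rows":      int(records.duplicated().sum()),
        "negative_totals":     int((records["order_total"] < 0).sum()),
        "unknown_countries":   int((~records["country"].isin(["US", "DE", "FR"])).sum()),
    }
    print(issues)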
And so, this is, shall we say, part of the … Is this the correct terminology? Part
of the software implementation process?
Well, it's part of a couple of things.
Obviously, there is software that helps with the process of data quality,
but the other thing is business process; making sure that you have a good process for data
being entered correctly, validated … On an ongoing basis.
On an ongoing basis.
The challenge, though, that we've heard from here is that speed is paramount these
days.
I surveyed 54 top CIOs around the world, from many companies, about how fast they have to
move.
And they all reported they're under very strong pressure to move much more quickly.
How can they take these quality measures when everyone's been asked to execute and deliver
as fast as possible?
Yeah, I mean, this is the problem.
Sometimes you have to slow down to go fast.
If you have a data quality problem, and you don't slow down to fix that, all of these
technologies that are supposed to help you go much faster are not going to help you;
you'll just be going faster in the wrong direction.
That's foundational.
It's foundational.
So, you really have to consider transforming the way you think about data from its origination
all the way through to its ultimate delivery of value.
And, if the origination of the data is flawed, then the whole rest of that supply chain,
if you will, becomes flawed.
Machine learning and artificial intelligence, it all sounds very new, but most of the advice
that I've just talked about goes back for a long, long time.
The fundamental processes haven't changed.
What is exciting now is that there are technologies that, if you get those fundamental processes
right, can help you go at an incredibly accelerated pace.
Now, we hear about data scientists being so in demand, and you need to be prepared to
hire data scientists.
I think most businesspeople, they hardly know what a data scientist even does.
Most data scientists hardly know what a data scientist does, but if it's on your resume,
your salary goes up!
So, everybody's a data scientist!
So, how should businesses relate to, let's call it; and to all my data scientist
friends out there, I apologize for this; the data scientist
problem?
So, I think, you know, you think fundamentally about what your core business is, and you
make some decisions from there about how far from your core business you want to
own something, versus where you want to procure something.
If my core business is data […], having an army of data scientists in-house makes
absolute sense.
If I'm a retailer, I think there's a reasonable question about how much of that I want
to do in-house.
But you need data science, so all of this is about data science.
So, what should we do; we businesspeople?
Well, the question is, do I want to hire them and own it in-house?
Or do I want to work with a firm that does that as their core competency?
Data science as a service, right?
[…] And I think the question is a core-versus-context
question, just like everything else.
Do I want to own in-house janitorial services?
Or, do I want to hire a janitorial services firm to come do that?
I think you can apply that to lots of different areas of your business: what is core,
and what things do you need but are not the business that you need to be […].
You know, we heard some great things about SAP Leonardo today, and you guys probably
already talked about some of this, but it seems like the packaging around that is really
designed to say, "Alright, so this is part of the end-to-end value chain that most organizations
have to realize.
Data is at the center, and the value it attracts is going to come from increasing layers
of technology: blockchain, machine learning, data intelligence, and so on."
If someone wanted to understand what SAP Leonardo does, how do you describe that in one sentence?
I'd say, first of all, there are a whole lot of technologies that are in Leonardo that
a hundred other companies will sell you as well.
So, the technology element is interesting and it's differentiated.
But, this isn't a technology solution.
It's a business problem solution.
So, if you look at Leonardo, the idea is once I solve a business problem, there are common
elements of my problem that apply to lots of other companies and lots of other areas.
So, we talked, I think, in one of the keynotes about an example that I mentioned here, which
is trains being outfitted with sensors for the purpose of predictive maintenance.
Eighty percent of what was done there would be interesting to a transportation company
who owned trucks, or a mining company that owned heavy machines, to be able to do the
same sort of predictive maintenance to minimize downtime and improve their operating costs.
And so, taking those common elements, and packaging them as industry-specific accelerators,
so that you as a CEO could identify a business problem and move very quickly from
identifying that problem to implementing the solution.
It's about accelerating that process in between.
So, it's a combination of technologies from the SAP portfolio that is aimed at a specific
industry or vertical issue.
And it's combined with a few services because what I've done is taken a hundred percent
solution for this customer, and generalized seventy or eighty percent of it.
And they get […], right?
We then need a few services to tailor it back to the next customer and the one after that.
And so, being able to do that lets us move from problem identification to implemented
solution about fifty percent faster.
So you've kind of systematized or productized some of the common technology elements, and
some of the common process and deployment aspects of it.
Exactly right.
So, there are business problems that are seventy or eighty percent common.
There are, if you look at those problems, technology solutions that are always going
to be seventy or eighty percent common; and it's taking those common elements of technology,
putting them together with the common approaches, the methodologies that are used to implement
them and knowing exactly what to do with that sensor data to get it to reduce the operating
expense, and packaging that as an accelerator.
In the old days, we used to call that "packaged solutions."
You must have industry … How do you break it out?
I mean, for industry; for banking; it's going to be attributes of this ... The underlying
technology may be the same, but certainly, the process aspects are going to be very different
than, say, retail or … And these accelerators are packaged by industry.
So, the recognition is certainly that business problems are fairly specific to industries.
There are some that can be generalized horizontally as more technology problems or process problems,
but the business problem, if you really want to make it that repeatable, has to be somewhat
specific to the industry.
Probably the farther down in the technology stack you go, the more commonality there is.
And as you get closer up to the process and to the activities that people perform, and
how the data is ultimately used, I would suppose there it becomes much more differentiated
industry-by-industry.
That's right.
And I think as a consumer, you could argue that if you need to go buy a toothbrush, the
difference between Walgreens, CVS, Target, is not distinguishable for you.
But if you get into actually how they run their business every day, obviously each retailer
has things that are specific to them that are different from other retailers.
And those have to be considered in […]. So, a lot of talk these days about blockchain.
We're seeing more and more types of data used just to […] transact […] the blockchain.
And now we're hearing things like "identity," or things like SKUs or unique customer IDs;
all sorts of things are being thrown in there.
What's the blockchain story, in terms of the data and the analytics?
Now we have to talk about blockchain analytics, I guess?
The whole new generation of data?
What's your view on all of that?
Gartner has; I'm a big fan of research, and Gartner has something that I think is appropriate
here, called the Hype Cycle.
And it's a curve that all new technologies start to ride, and at some point
the expectations are that this technology can do everything, it can solve every problem,
it can slice, it can dice.
And I think that may be a little bit of where we are with [blockchain] right now.
There's a lot of potential.
What I think is really interesting is, which ones are going to land on real business value?
Which ones are really going to change a business model or business process?
And I think some of that's still going to be worked out.
But I love the fact that there's so much potential, and there's so much conversation
and people are trying things.
The key, I think, is to fail fast with any technology that we're experimenting with.
Mike, we just have a few minutes left.
And, what advice do you have?
You're working with a lot of customers.
You see a lot of different businesses.
And what advice do you have for a businessperson who is looking at all of this and hearing
about machine learning, and all of this stuff: they're trying to figure out what to do.
So, what should they do?
My Number One piece of advice, and this is a little bit shamelessly associated with
the approach we've taken with SAP Leonardo, is to take one small step at a time, get
business value from that step, or fail fast and move on.
Don't try to solve every business model, every business process, every business problem all
at once with some giant tens of millions of dollars transformation project.
Start small.
Start small, find some quick wins, deliver some business value, and then do it again.
On the things that don't work, fail fast and fail cheap, and move on.
And I think that's probably the most powerful advice that I can offer, and that's the
design principle behind Leonardo.
And I know, Dion, we speak with lots of CIOs.
It's certainly great advice for any CIO.
Yeah.
Totally agree.
Getting quick wins and building the skills by doing that allows
you to tackle and move […]. Alright.
Well, this has been a fascinating conversation.
We have been speaking with Mike Flannagan, Senior Vice President at SAP, and I'm so
thrilled that Dion Hinchcliffe, industry analyst focused on CIOs, has come to join us.
And of course, Dion has his own show on CxOTalk focused on CIOs.
Thank you, everybody, for participating today.
Mike Flannagan, thanks so much!
It's been great!
Thanks for having me.
And Dion…
Thank you.
Thank you, everybody, have a great day.