By Daniel Riedel and Andrew Storms

For all of the benefits of artificial intelligence (AI), the best minds, organizations, and technologists in every corner of the globe will still struggle with one very significant challenge: How do we support AI at scale?

The current state of AI has clear limits. There are many defined types of AI, from cognitive computing to neural networks, and each of these types will drive a unique set of demands. These demands will continue to strain our infrastructure as Google, Amazon, and others build out their environments at a frenetic pace. Google's capital expenditures alone grew another 10% to $10.9 billion in 2016. That's a lot of CPU and infrastructure to support our current computing demand, and yet, even without the impending strain of AI on our systems, we're already struggling to keep up.

At New Context, we realize that AI is clearly still in the beginning stages of its evolution, especially given that there is so much discussion around how to even define it. Our concern is how to accelerate AI adoption in a secure way, specifically to advance the goals of businesses and critical infrastructure organizations, while protecting consumers' data and privacy.

We've realized that the only way to successfully execute AI@Scale will be via a holistic approach, one that cannot be deployed without DevOps tools and processes to manage it all securely.

We see five key components that will be critical to AI innovation: edge computing, secure communications and messaging, infrastructure automation, data governance, and attribution. Here's why.

Edge Computing

We continue to see the ebb and flow between centralized computing and pushing computing to the edge. This tug of war has been in place since the original mainframes went mainstream in the 1970s.

But for AI to be effective, we must leverage edge computing to empower the smaller, real-time decisions at the edge, made as close to the device as possible. Have you ever asked Siri a question and, instead of an answer, gotten a sarcastic non-response as the request timed out? Edge computing will be required for AI to reduce latency. A self-driving car simply can't wait for the cloud.
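To make that latency argument concrete, here is a minimal sketch of the pattern edge computing enables: ask the cloud, but never wait past a hard latency budget, and fall back to a local model when time runs out. The `local_infer` and `cloud_infer` functions are hypothetical placeholders, not any vendor's API.

```python
import concurrent.futures

LATENCY_BUDGET_S = 0.05  # hard real-time budget for the decision

# One long-lived pool, so a slow cloud call never blocks the decision path.
_pool = concurrent.futures.ThreadPoolExecutor(max_workers=2)

def local_infer(sensor_data):
    # Hypothetical on-device model: fast and always available.
    return {"action": "brake", "source": "edge"}

def cloud_infer(sensor_data):
    # Hypothetical round trip to a centralized model: richer,
    # but slow and gone the moment connectivity drops.
    raise TimeoutError("network unavailable")

def decide(sensor_data):
    """Prefer the cloud answer, but never wait past the budget."""
    future = _pool.submit(cloud_infer, sensor_data)
    try:
        return future.result(timeout=LATENCY_BUDGET_S)
    except (concurrent.futures.TimeoutError, TimeoutError):
        # The vehicle can't wait for the cloud: decide at the edge.
        return local_infer(sensor_data)

print(decide({"speed_mps": 27}))  # {'action': 'brake', 'source': 'edge'}
```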

At the same time, this proliferation of edge computing will create billions of software instances that will need to be managed. Managing anything in the billions isn't easy, and to properly execute, we're going to have to rely heavily on automation, which we'll tackle later in this piece. GE and IBM have both committed in a big way to edge computing, which is an encouraging sign that this will be an important area in the evolution of AI@Scale.

Secure Communications and Messaging

By the very nature of having to get the data back from the edge into a centralized cloud environment, AI@Scale will require that communications be built on a strong, secure platform. We will have to wait and see whether current platforms can be applied to AI, whether they will need to evolve to include full attribution, or whether an entirely new crop of secure messaging platforms will appear.

Regardless of platform, let's say that we're comfortable with the strength of that platform; now we have to ask, was it configured correctly? Secure communications across many endpoints require a fair amount of configuration and have the potential for many changes. Just because you enabled SSL on your website and devices doesn't mean you're not spilling data to other devices you're not aware of. Passwords will need to change, certificates will have to be rotated, and then we must ask ourselves how to manage and maintain all of those configurations efficiently.

It's complex. Luckily, DevOps tools and processes help to solve those challenges today and will be very useful as we look to communicate securely within AI@Scale.
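As a small illustration of why this configuration surface demands automation, here is a sketch that audits TLS certificate expiry across a fleet of endpoints using only Python's standard library. The host list is a stand-in for whatever inventory your configuration management system actually holds.

```python
import socket
import ssl
import time

# Stand-in inventory; in practice this comes from your configuration
# management system or device registry.
ENDPOINTS = ["example.com", "example.org"]

WARN_DAYS = 30  # flag certificates expiring within a month

def cert_days_remaining(host, port=443):
    """Complete a verified TLS handshake and return days until expiry."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return int((expires - time.time()) // 86400)

for host in ENDPOINTS:
    try:
        days = cert_days_remaining(host)
        status = "ROTATE SOON" if days < WARN_DAYS else "ok"
        print(f"{host}: {days} days remaining ({status})")
    except OSError as exc:
        # A failed handshake is itself a finding worth surfacing.
        print(f"{host}: TLS check failed: {exc}")
```

Run on a schedule and wired into rotation tooling, a check like this is the difference between discovering an expired certificate in a dashboard and discovering it in an outage.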

Infrastructure Automation

You have to address the bare essentials of infrastructure automation: the ability to automatically stand up and tear down resources, install operating systems and applications, and wire all of these components to talk to each other. Only then can more resources be turned to the complex issues of AI.
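The heart of those essentials is reconciliation: declare the resources you want, compare against what exists, and create or destroy the difference. Here is a minimal, provider-agnostic sketch of that loop; `provision` and `decommission` are hypothetical stand-ins for calls into your cloud API or configuration management tool.

```python
# Desired state, as it might be declared in version control.
DESIRED = {"inference-node-1", "inference-node-2", "message-broker-1"}

# Actual state, as reported by the provider; it has drifted.
actual = {"inference-node-1", "message-broker-1", "orphaned-test-vm"}

def provision(name):
    # Hypothetical stand-in for a cloud API call that also installs
    # the OS image and application layer for this role.
    print(f"creating {name}")
    actual.add(name)

def decommission(name):
    # Hypothetical stand-in for tearing a resource down cleanly.
    print(f"destroying {name}")
    actual.discard(name)

def reconcile(desired, current):
    """Converge the actual infrastructure toward the declared state."""
    for name in sorted(desired - current):
        provision(name)
    for name in sorted(current - desired):
        decommission(name)

reconcile(DESIRED, set(actual))
print(f"converged: {sorted(actual)}")
```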

You're going to need the basics covered, because the global market for smart machines was valued at $5.3 billion in 2013 and is expected to reach $15.3 billion by 2019. And if you're GE, that number is a small drop in the bucket compared to the trillions of dollars in GDP that will be powered by the Industrial Internet of Things (IIoT) revolution. What will be at the core of everything from smarter toasters to the IIoT? AI. As nearly every organization looks to take whatever product it creates and make it "smarter" with AI, we're looking at astronomical demand and corresponding complexity. The bottom line is that smarter, AI-powered devices will be nothing without an infrastructure that can scale up and down to handle demand in a secure, cost-effective way.

Companies are already solving for right-sizing their systems, and autoscaling is already critically important. This is one place we do not need to reinvent the wheel, because DevOps tools have already solved it. There will likely be some tweaks based on the mechanics, and the decision tree will look different, but the tools to scale up and down will be similar.
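To show how familiar those mechanics already are, here is a sketch of the threshold-and-cooldown decision loop that sits behind most autoscalers. The thresholds and the utilization feed are hypothetical and would be tuned differently for AI workloads, but the shape of the tool is the same.

```python
import time

SCALE_UP_AT = 0.80    # average utilization that triggers growth
SCALE_DOWN_AT = 0.30  # utilization below which we shed capacity
COOLDOWN_S = 300      # don't thrash: one scaling action per window
MIN_NODES, MAX_NODES = 2, 64

def decide(nodes, utilization, last_action_ts, now):
    """Return the new node count, honoring bounds and cooldown."""
    if now - last_action_ts < COOLDOWN_S:
        return nodes
    if utilization > SCALE_UP_AT and nodes < MAX_NODES:
        return min(nodes * 2, MAX_NODES)   # grow aggressively
    if utilization < SCALE_DOWN_AT and nodes > MIN_NODES:
        return max(nodes - 1, MIN_NODES)   # shrink conservatively
    return nodes

# Example: a hot cluster doubles, then holds during the cooldown.
now = time.time()
print(decide(nodes=8, utilization=0.91, last_action_ts=0, now=now))     # 16
print(decide(nodes=16, utilization=0.91, last_action_ts=now, now=now))  # 16
```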

Infrastructure automation will be critical in order to evolve into AI@Scale.

Data Governance

What's most alarming about data governance is that people are not concerned enough about it today, and it will only become more critical with AI@Scale.

If an organization is forward-thinking enough, or operating under heavy regulation like HIPAA, ITAR, or NERC, data governance is likely top of mind. But they may be looking at things the wrong way. They're focused on securing data: customer data and company data. When it comes to AI, they're focused on how AI can support data governance, not on what new data governance will be needed for AI.

There will be an entirely new set of things to govern with AI. We'll need to consider data governance as it relates to the outcome of the workload, including the patterns that are found and the algorithms that are used. Imagine a day when facial recognition technology has been simplified to a handful of algorithms and a drop-down menu. It might be easier than ever to execute accurate facial recognition, but what happens when the subject is under 18? What rules will apply to what can be collected and shared?
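As a taste of what governing the outcome of a workload might look like, here is a hypothetical policy gate that runs before a recognition result may be stored or shared. The rules and field names are invented for illustration; the point is that the check belongs in the pipeline, not in a policy binder.

```python
from dataclasses import dataclass

@dataclass
class RecognitionResult:
    subject_id: str
    estimated_age: int
    consent_on_file: bool
    confidence: float

def may_retain(result: RecognitionResult) -> bool:
    """Hypothetical governance gate applied to a workload's output."""
    # Minors get categorically stricter treatment.
    if result.estimated_age < 18:
        return False
    # No consent, no retention, regardless of accuracy.
    if not result.consent_on_file:
        return False
    # Low-confidence matches are noise we shouldn't keep.
    return result.confidence >= 0.95

match = RecognitionResult("subj-4411", estimated_age=17,
                          consent_on_file=True, confidence=0.99)
print(may_retain(match))  # False: the under-18 rule wins
```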

AI will open up an entirely new world of data governance issues, and it will be critical to apply what we have learned previously to effectively manage the new frontier.

Attribution

We've talked before about the need for attribution in critical infrastructure, and within AI, it's even more important. Accountability can only be enforced through attribution. Every machine that runs AI needs a full trace of everything that's happened. While we may entertain privacy arguments in other areas, we can all likely agree that whatever process makes a self-driving car decide to brake or accelerate should be fully attributed.
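One way to think about "a full trace of everything that's happened" is an append-only, tamper-evident log, where each entry is chained to the previous one by a hash. This is a minimal sketch of that idea, not a production audit system; the actor and event names are illustrative.

```python
import hashlib
import json
import time

def append_entry(log, actor, action, details):
    """Append a record chained to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"ts": time.time(), "actor": actor,
              "action": action, "details": details,
              "prev_hash": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(record)

def verify(log):
    """Recompute the chain; any edit to history breaks it."""
    prev_hash = "0" * 64
    for record in log:
        body = {k: v for k, v in record.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True

trace = []
append_entry(trace, "model:brake-v7", "decision",
             {"input_frame": 88231, "output": "brake"})
append_entry(trace, "model:brake-v7", "decision",
             {"input_frame": 88232, "output": "hold"})
print(verify(trace))                          # True
trace[0]["details"]["output"] = "accelerate"  # tamper with history
print(verify(trace))                          # False
```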

In conclusion, it's a mark of maturity for organizations to use DevOps tools and processes to power a holistic approach like Lean Security. That very same approach is the only way for us to achieve the future of AI@Scale safely.