Training for AI-Native

The Epistemologist
8 min read · Nov 13, 2016

Great software companies are anchored by people with a high tolerance for experimentation and uncertainty. The announcement by Originate CEO [my employer], Rob Meadows, that the company was officially “AI-Native in 2016” created a perfect storm of opportunity for Yasi Chehroudi, a Senior Product Manager based in our Los Angeles office. With a background in education and curriculum design, as well as experience leading Agile software development for some of the company’s larger enterprise partners, she saw the chance to combine her skills and passions to help make that vision real in the broadest sense.

Yasi began working on an AI course curriculum with the goal of bringing the entire company up to speed on all things data science, machine learning, and AI, in a way that is relevant to our day-to-day business. Besides the subject, what drew me to Yasi’s program is her choice to test the curriculum on herself and a small cohort — the AI Alpha Course Team — as she was developing it, in true Agile fashion. The costs of joining an education-development effort, of course, are that some material may ultimately be thrown away, and some projects may have low success rates. Also, questions arise that are better suited to educators and academics than to product developers and managers.

Chartered to help the company organize its learning efforts, the team spent its first few weeks aggregating information about the state of the art in AI and reading through tons of potential material. Given that a portion of the team is non-technical, they discovered almost immediately that most of the material found on the Web is not suitable for them; specifically, the deeper articles and examples online are geared toward engineers or data scientists. Says Yasi, “It takes more than an engineer to build a product — it takes a sales team that can speak intelligently about project requirements, product managers and designers who know how to discern customer needs and define features, and a full development team to build, deploy, and maintain the solution.” The team acknowledges the challenge — to build a training program that supports both the technical and non-technical roles of AI-Native products, as they are delivered in reality.

A Curriculum in Artificial Intelligence for Designers and Product Managers is not going to come from binge-watching Westworld.

“The objectives of AI-Native products are molding and shifting as we go,” explains Yasi. “I spent some time with our CEO discussing the outcomes we want. After all, there are already plenty of resources out there that individual team members can go to if all they want is to learn about software engineering for AI. You can even leave your job, go back to school, and learn some practical skills in coding or integrating TensorFlow or Watson. But the result will probably lack real-world applicability, or endurance,” unless the non-technical aspects of the process are equally mature and incorporate the new paradigm of AI-Native development.

AI-Native means developing an eye for data and how it may or may not support an AI strategy. It also calls for new collaborative skills. AI-Native features change over time, with the acquisition of new data, usage patterns, and user intentions. User experience changes along with them, so product roadmaps become even more critical to charting direction and helping product owners plan for the future. Originate predicts broad AI-enabled acceleration in the emergence of new interactions, products, and development tools. The consequence will be to render current software, including SaaS products, obsolete at an increasing pace.*

Speaking about staying focused on products, Yasi adds, “We don’t want to leave people [Originators and our Partners] on their own to learn. We want to keep them in the practical context of working on the products and platforms that Originate builds for the market, and for itself.”

Yasi and Rob see Originate as being in a unique position to develop people, and products, with “AI street smarts,” because the environment encourages creativity and scientific validation, and offers the opportunity to apply learnings across a broad range of industry verticals and platforms. This contrasts strongly with the current theme in tech startups, which tends to reward singular focus on one AI-driven model — a recommendation engine for shoes, say, or cyber-security — where success is easily measured on the basis of purchases. By applying core techniques to a broad set of problems, Originators are seeing the strengths and limitations of the tools, becoming detectives of quality in AI.

“Unless we direct our colleagues to diverse, actionable work, we are not really benefiting the people, our partners, or the company. There is more to this than just the program content. It’s about how we talk about the work, too. It’s going to take time to effectively integrate all aspects of the program with the organization, but starting with this assumption [that it was for everyone] was important to us.”

The next challenge was time. We all have other work to do in the meantime, and not all of it is AI-Native.

“I would love if people could spend time every day on the coursework,” she comments, “but one of the important learnings is how to integrate professional development into our commitments and make progress consistent.”

The result has been that we’ve split off into groups of two or three, convening once per week to share notes. Like an Agile sprint review, we talk about our approach and check our priorities and plans, but let individuals take ownership of progress. Yasi studies the group’s feedback, taking notes and revising her materials.

Much to my relief, she doesn’t often put us on the spot. “I don’t ask for individuals to show progress, but I want to see them working together to achieve something, collaboratively.”

“I’m no longer Google-Searching Machine Learning and AI when I’m researching state of the art.”

Yasi continues, “I’ve had to pull myself back from the terms AI and Machine Learning. Much of this is really Data Science. When we say AI, a lot of people think Ex Machina, Terminator, or Westworld, and that’s not really what we are doing in applying these techniques to partner products and platforms. It feels like we’re putting the cart before the horse, often, using the term AI. What has to come first is data, and Data Science. Machine Learning and AI are deeper holes in that world.”

It’s by being self-aware in this way, but not overly conservative, that we really get out of the AI-Winter mindset and take advantage of the powerful resources exemplified by Watson, TensorFlow, Alexa, and the like.

DATA

The most common subject in team meetings is data. Historically, Originate didn’t need to concern itself with the large pools of data the products it built were generating; we left that to our partners. What we, and they, are learning is that the data is not only valuable, but deeply intertwined with the features of the product, its design, even its trajectory and go-to-market execution. One goal of our AI-Native products is to harmonize data science with design and product, while recognizing how each limits, even overdetermines, the others.

“When you make products smart, they become probabilistic.”

Yasi continues about limitations, citing accidents involving self-driving cars and stories of people struggling to regulate their smart home thermostats. “That AI learning curve still feels unintelligent to most people. When the product can pick up that the user is fighting with it, or it can self-observe and recognize its limitations, inherently, that’s when it starts to feel more intelligent to me.”

This might be annoying or even humorous at home, but could be dangerous when it comes to the air conditioning in a data center, or the application of machine learning to insurance actuarial risk. “It might sound like a concern for much later, but in fact we have to use the same tools to limit, monitor, and de-risk products while they are operating.” In other words, the developers still need to consider the limits of acceptable behavior, and the probability that the learning will match the expectations of the user.
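The idea of limiting learned behavior up front can be sketched in a few lines. The following is a minimal, hypothetical guardrail — the function name, model output, and 16–28 °C bounds are all illustrative assumptions, not Originate’s actual code — that clamps a model-predicted thermostat setpoint to hard safety limits and reports excursions so they can feed a monitoring pipeline:

```python
# Hypothetical guardrail for a learned thermostat setpoint.
# The bounds (16-28 C) are an illustrative product decision.

def guarded_setpoint(predicted_temp_c, low=16.0, high=28.0):
    """Clamp a model-predicted setpoint to hard safety bounds.

    Returns a temperature guaranteed to lie in [low, high]; an
    out-of-bounds prediction is reported so that, in a real
    deployment, it could feed monitoring and alerting.
    """
    if predicted_temp_c < low or predicted_temp_c > high:
        print(f"model out of bounds: {predicted_temp_c:.1f} C")
        return min(max(predicted_temp_c, low), high)
    return predicted_temp_c
```

The learned model stays free to adapt, but the acceptable-behavior envelope is set by people, in advance.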

Discussing chat bots, she continues, “Bots have to recognize the context and circumstances of a conversation before they become useful. We can’t fully QA these things in advance. We have to monitor and test, in deployment, continuously, which is a product and delivery problem. When you build and train an algorithm, when is it safe to deliver to the world?” Or when is the probability of an AI-malfunction effectively zero? Earlier this year, in just such a circumstance, Nest had to rapidly deploy an update to their intelligent thermostats for customers caught in the cold.
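The monitor-and-test-in-deployment point can likewise be sketched. Assuming a hypothetical intent classifier that returns a confidence score (every name, threshold, and reply below is illustrative), a bot can decline to act below a confidence threshold and queue the turn for review, so QA continues after shipping rather than ending there:

```python
# Hypothetical sketch of in-deployment monitoring for a chat bot.
# Intent labels, the 0.8 threshold, and the queue are assumptions.

FALLBACK = "Sorry, I didn't catch that. Could you rephrase?"
review_queue = []  # low-confidence turns kept for testing/retraining

def respond(user_text, intent, confidence, threshold=0.8):
    """Answer only when the (assumed) classifier is confident enough.

    Below the threshold, the bot falls back to a safe reply and
    records the turn for continuous review.
    """
    if confidence < threshold:
        review_queue.append((user_text, intent, confidence))
        return FALLBACK
    return f"Handling intent: {intent}"
```

Shipping, in this framing, is not the end of testing but the start of a monitored feedback loop.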

“One of the things we are not going to do is tell a partner, yes, we can make this AI-Native application for you right away, and then discover that no matter how well we know the domain, we don’t have the right data to make a training set.” The smart home will need to know when you’ve left for a week, or even update its program when your flight is cancelled and you’re headed back for the night.

A secondary benefit of internal AI-Native Education is we are learning how to articulate AI Product Evolution as a series of exercises of increasing specificity and reliability. Training our people in the tools of Data Science is informing how we draw AI-Native roadmaps for our partners.

Perhaps the top benefit, however, is the opportunity to enrich the scientific dimensions of software development. To a large degree, progress in this field has been driven by creative execution. Rewards in the last ten years have gone, in many cases, not to the best coders and engineers but to those who could market their products fastest. With the added potential of AI, its dependence on data, and an effective process for integrating data collection with machine learning, comes a heightened necessity for precision and thoroughness, and a continued dedication to testing. If bots can learn at the speed of the Internet, then so must their parents and supervisors.

*cf. “Seven Things Enterprises Don’t Want to Hear About an AI-Native Future,” by Rob Meadows.

https://medium.com/@pete.swearengen/drawing-ai-native-roadmaps-533353c49661
