Know My Role
Tell us about your interaction with AI and other intelligent technologies that you work with in your daily life.
There’s this prevailing suggestion that AI is only beginning to emerge, but in reality it’s everywhere. Alexa gives me my morning briefing each day, tailored to my likes and interests. Android Auto successfully routes me to work every day, learning my preferences and driving style. My phone identifies all of the human faces in my photos with astounding accuracy. There are AI-based recommendation engines on many if not most ecommerce sites that are genuinely helpful. All of us already work with AI, to a much greater extent than we realize.
How did you start in this space? What galvanized you to co-found Alegion?
I’ve always had a passion for using technology as a vehicle for creating new kinds of workplace opportunities. We started Alegion nearly seven years ago as a company that could bring a mix of technology and people to bear on big data challenges. We cut our teeth on gigantic content moderation initiatives and very large scale image annotation projects. Supporting these endeavors drove us to build a superb data and task management software platform, and a sizable, global pool of on-demand data specialists.
About 18 months ago, we realized that virtually all our inbound inquiries were about AI training data. Sensing that this was a great opportunity, we shifted the company’s entire focus to AI enablement and acceleration.
How do you differentiate Alegion from other AI-as-a-service/ data-as-a-service (DaaS) providers?
We aren’t an AI-as-a-Service provider. We use AI in our own platform, but we don’t sell AI technology, and we aren’t a DaaS provider either. Our business involves labeling and annotating client data so that it can be used to train their ML algorithms.
How do you see the raging trend of including ‘AI in everything’ impacting businesses?
The raging trend has definitely been good for trade show and conference businesses. We go to a lot of events, and they are all overflowing with people who are trying to figure out what AI is, what it means to their own businesses, and how it affects their careers.
The trend has also been good for recruiters who work with data scientists and machine learning software engineers, because people with these skills represent one of the hottest sellers’ markets there is.
Alegion works primarily with Fortune 500 and Internet 100 organizations. In our experience, many if not most of these businesses have reacted to the “AI in everything” trend by standing up one or more dedicated AI labs, staffed with data scientists and IT personnel. For the most part they have established aggressive goals for themselves, with aggressive timelines. At the same time, their approach to their early AI projects is fairly deliberate, which is in keeping with their overall company size and culture.
The companies outside of the Fortune 500 that we’ve dealt with often lack the resources to attract a critical mass of technical talent. Many of them are inclined or forced to rely on commercial algorithms and off-the-shelf tools and data. Whether this will work for them really depends on their use case and their model confidence requirements.
What are the biggest challenges and opportunities for AI companies in dealing with inflating technology prices?
Both labor and infrastructure costs will always inflate to capture the premium afforded in an exploding market. Open-source software, cloud infrastructure, and offshore talent are the typical methods for curbing costs. Each of these has drawbacks that can eat into the upside opportunity of the market. At present, the focus is more on providing the highest value to the customer than on price pressure to do it as cheaply as possible. This will not always be the case as AI becomes ubiquitous.
Tell us more about the Alegion training data platform. Which data analysts are best suited to benefit from it?
When we talk about training data we’re referring to the data that is required to train a machine learning algorithm. In order to be able to “see” and “understand” what’s in a photograph, a computer vision algorithm needs to be exposed to many thousands or even millions of photographs, all of them labeled and annotated in ways that help the algorithm learn.
If we want an algorithm to recognize the bridges in photographs, we have to show it thousands and thousands of photos where the bridge is clearly marked and labeled.
The act of marking and labeling images in this way requires human judgement today.
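As a rough illustration, a single labeled training example for bridge detection might be represented along these lines. The schema below is hypothetical, not Alegion’s actual data format; it simply shows how a human-drawn annotation pairs an image with the objects marked in it:

```python
# Hypothetical annotation record for one photo in a bridge-detection dataset:
# each labeled object carries a class name and a pixel-space bounding box.
def make_annotation(image_id, width, height, boxes):
    """Bundle an image's metadata with its human-drawn bounding boxes."""
    return {
        "image_id": image_id,
        "width": width,
        "height": height,
        # Each box: (class label, x_min, y_min, x_max, y_max) in pixels.
        "objects": [
            {"label": label, "bbox": [x0, y0, x1, y1]}
            for (label, x0, y0, x1, y1) in boxes
        ],
    }

record = make_annotation("photo_0001.jpg", 1280, 720,
                         [("bridge", 210, 340, 1090, 610)])
```

Multiply a record like this by hundreds of thousands of photos and you have a training dataset.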
During the early stages of an algorithm’s education, humans are needed to virtually spoon feed the algorithm. As the model gets smarter, humans remain in a supervisory role, training the algorithm on its mistakes. And even after the model is in production it’s typical to have humans in the loop to resolve edge cases that the model can’t resolve on its own.
As you can imagine, you have a lengthy, iterative training process that involves:
- enormous volumes of sometimes very complex data,
- potentially thousands of human “trainers” who are layering structure on the data through computer-based tasks, and
- constant checks on both machine and human accuracy.
You need a technology platform to manage everything. I’ve just described the Alegion Training Data Platform.
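The human-in-the-loop pattern described here can be sketched in a few lines. This is a generic illustration of the technique, not Alegion platform code, and the confidence threshold is an assumed value: the model handles items it is confident about, and low-confidence items are routed to a human annotator whose corrected labels can later retrain the model.

```python
CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff; tuned per project in practice

def route_items(items, model_predict, human_label):
    """Send each item to the model; fall back to a human on low confidence."""
    results = []
    for item in items:
        label, confidence = model_predict(item)
        if confidence >= CONFIDENCE_THRESHOLD:
            results.append((item, label, "machine"))
        else:
            # An edge case the model can't resolve on its own:
            # a human adjudicates, and the corrected label can
            # feed the next round of training.
            results.append((item, human_label(item), "human"))
    return results

# Toy usage with a stubbed model and a stubbed human annotator:
preds = {"img1": ("bridge", 0.97), "img2": ("bridge?", 0.42)}
out = route_items(["img1", "img2"],
                  lambda i: preds[i],
                  lambda i: "no_bridge")
# img1 is labeled by the machine; img2 falls back to the human.
```

As the model gets smarter, a larger share of items clears the threshold and the human role shrinks toward supervision of edge cases.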
How should young technology professionals train themselves to work better with AI and virtual assistants?
Honestly, I think there’s little likelihood that we’ll need to learn to work with bots. As I said earlier, we already interact with AI in countless ways, without realizing it.
I actually think the big challenge ahead of us lies in teaching our AIs and virtual assistants and bots how to work with each other. Integrating software programs has always been hard, and we’ve solved the problem by forcing the use of inflexible, hard-wired APIs.
Traditional APIs will not work in a world where two software programs have to interrogate each other, negotiate with each other, and just generally get along.
How do you consume information on AI/ML and related topics to build your opinion?
Mostly I listen to customers.
We are in a unique position to see a broad array of customer experiences (successes and failures) where AI is being applied.
We have a front row seat to the AI revolution as industries are leveraging AI for the first time. Manufacturing, healthcare, retail, and defense are just some of the sectors where we have learned new applications for AI.
What makes understanding AI so hard when it comes to actually deploying it? How do you manage these challenges at Alegion?
Many of our clients first contact us after they’ve attempted to create their own training data. The first thing they tell us is “We’ve spent 80% of our project budget, we have run long past the project deadline, and our model is nowhere near the 98% confidence level we need to realize ROI.”
They cannot deploy their models because the models don’t work well enough. They don’t work well enough because their algorithms have been inadequately trained. And the algorithms have been inadequately trained because they’ve been exposed to either too little, or inaccurate, training data.
Our entire business is about managing this challenge. When we get engaged with clients we gather detailed information about what the algorithm needs. We also make sure that we set up the project in ways that meet the client’s data security requirements. Once we understand the client’s data and how it needs to be labelled and annotated, we create a task structure that will meet those needs. We select a worker pool with the skills and work history that the project demands. We distribute tasks to the worker pool. We monitor accuracy and efficiency. We test, and we devise means of adjudicating and measuring progress. When we’ve reached an accuracy level that will produce the required model confidence, we deliver the training data to the client.
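One standard way to monitor worker accuracy and adjudicate disagreements, as described above, is to distribute the same task to several workers and take a majority vote; no-consensus items escalate to an expert reviewer. This is a common industry technique, sketched here as an assumption about how such checks can work rather than a description of Alegion’s exact method:

```python
from collections import Counter

def adjudicate(labels, min_agreement=2):
    """Return the majority label if enough workers agree, else None to escalate."""
    winner, count = Counter(labels).most_common(1)[0]
    if count >= min_agreement:
        return winner
    return None  # no consensus: route to an expert reviewer

adjudicate(["bridge", "bridge", "overpass"])   # majority wins: "bridge"
adjudicate(["bridge", "overpass", "tunnel"])   # no consensus: None
```

Tracking how often each worker agrees with the adjudicated answer also gives a running accuracy score per worker, which feeds the worker-pool selection mentioned above.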
Which is harder – choosing AI or working with it?
We are seeing a commodification of AI models. The actual differentiation is in the application of the model and the level of training. Choosing AI is becoming about preference and ecosystem rather than feature differentiation.
How potent is the Human-Machine intelligence for businesses and society? Who owns machine learning results?
The importance of training goes beyond basic quality and efficiencies. Humans must train the machines to follow social constructs, just as we have done for our own children. Basically, it is human training that teaches a machine right from wrong. That may sound overly philosophical, but when machines are determining which content you see in your news feed, or what treatment options are available for your medical condition, these philosophies become important and relevant.
Who owns the learning is an interesting question that will play out more specifically in the public sphere and courts in the future. Today, it closely follows big data licensing norms so companies employing their own solutions will own their training data and model improvement. However, when third party services are used (such as online photo galleries, navigation systems, and social media), we license our content to the service provider in exchange for the use of the service. The service provider in those cases has license to train their models and retain the learning.
Eventually, I believe there will be a democratization of our personal data licensing and individuals will receive micro-payments whenever AI uses their content for improvement.
Where do you see AI/Machine learning and other smart technologies heading beyond 2020?
More AI in more places and contexts. Superb computer vision capabilities. More and more machine understanding of human nuance in speech and behavior. And advances far more rapid than we expect.
In our own field, we fully anticipate that humans will have an ever-smaller role in the training of machines. More and more, machines will be trained by other machines.
The Good, Bad and Ugly about AI that you have heard or predict –
We are witness to the ugly every day. The lack of good training data is a major obstacle to the deployment of AI systems everywhere.
What is your opinion on “Weaponization of AI”? How do you deal with the challenge here?
Weaponization is a relative term that means different things in different contexts. Businesses are right now trying to weaponize their data for the purposes of differentiating and eventually eliminating competitors. Organizations can also use AI, just as they do with any other technology, to promote their ideals and attempt to gain advantages. If ‘Weaponization’ means to use AI for nefarious purposes, then it is up to society to enact laws to constrain those efforts. It should not be left up to tech corporations to self-govern the use of technology in society.
The Crystal Gaze
What AI start-ups and labs are you keenly following?
I am always watching for companies that are applying data science to tackle problems in specific verticals, such as RealMassive in commercial real estate and Drishti in manufacturing.
What technologies within AI and computing are you interested in?
We deal with AI disciplines that involve unstructured data. Most of our clients bring us projects in computer vision, natural language processing or entity resolution.
As a tech leader, which industries do you think will be fastest to adopt AI/ML with smooth efficiency? What are the new emerging markets for AI technology?
Among the Fortune 500 we see the most rapid adoption of AI and ML in retail, healthcare, financial services, manufacturing and defense.
What’s your smartest work-related shortcut or productivity hack?
The F4 key applies an absolute reference to a selected cell in Excel. Do you know how much of my life I’ve spent inserting dollar signs?
But the real productivity gain is probably my AWS WorkSpace. I can step into my secure desktop from anywhere in the world and have the same applications and experience.
Tag the one person in the industry whose answers to these questions you would love to read:
Prasad Akella, CEO Drishti
Thank you, Nathaniel! That was fun and hope to see you back on AiThority soon.
Nathaniel Gates is a career technology worker and entrepreneur focusing on the Cloud Computing and Cloud Labor spaces. Nathaniel co-founded Alegion in 2012 and now serves as its CEO and an industry evangelist. Prior to Alegion, Nathaniel founded Cloud49, a successful cloud computing solutions provider focused on the public sector. Nathaniel has a passion for providing next generation work opportunities to people around the world who demonstrate a willingness to work hard for themselves and their families. Nathaniel lived and worked in Alaska for 36 years prior to moving to Austin, Texas with his family in 2012. Nathaniel greatly enjoys this warmer climate with his wife Wendy, and their three children Caleb, Titus, and Lauren.
Alegion’s full-service platform accelerates enterprise AI model development and validation through the delivery of large-scale, high-quality training datasets. We configure tailored data workflows and integrate legions of trained data specialists with machine intelligence to deliver accurate training data that continually augments and validates your predictive models, so you don’t have to.
Based on our experience delivering large-scale data solutions for Fortune 500 companies since 2012, our platform integrates managed human intelligence with AI systems and ML-augmented quality controls to power enterprise-class AI solutions in the retail, automotive, government, security, medical, and financial services industries.