Responsible AI Use: Lessons from an Industry Insider

By Aaron Edell
Senior Vice President, AI & Innovation

April 23, 2024

When I was in high school, I got a ‘job’ working at my dad’s office. He was a medical journalist and news reporter and, as such, had endless video packages and stories on videotape. My job was to watch some of these tapes and log everything that happened. A five-minute edited segment would take over an hour to log. I know that sounds insane, but try it for yourself. All I had was a pen and a blank log sheet. I’d watch a few seconds, pause, write down everything that was said or summarize what was going on, press play, rinse, repeat. I remember being shocked at how long such a simple task took.

As the years went by and I pursued a career in tech, the pain of logging those tapes was never far from my mind. In 2015, I finally got a chance to address it when I joined a startup called GrayMeta as its first employee.

My goal was to figure out a way to automate as much of that process as possible. Imagine your entire archive of video content automatically logged and tagged for you. You’d be able to find any moment you wanted in seconds, and you’d never lose anything ever again. This was the dream.  

In 2016, I saw a demo of Microsoft’s Cognitive Services, which could take any audio and transcribe it into text automatically. The accuracy was poor, and it was expensive as hell, but the potential was enormous. I directed the team to integrate that service right away. Soon thereafter, other ML services became available for API consumption: face recognition, optical character recognition (OCR), image classification, and more. We integrated them all. Now we had a platform that could create a robust metadata record for every piece of video you owned.

In 2017, I co-founded a startup called Machine Box, where we built our own machine learning models using the latest research and open datasets available on the internet at the time. Machine learning was really hot in the market, and questions around ethics and the bias of training data were starting to swirl. Up until that point, I hadn’t really thought about training data or the ethical implications of false negatives and positives. But these things had serious consequences in our world, from security and law enforcement to cancer detection and self-driving cars. In my mind, I had only ever seen ML as a way to make my monotonous videotape-logging job easier. I cared about the technology’s potential to make a better world for us all, so I dug in and learned what I could about these issues.

It turned out there were all kinds of issues, the biggest of which at the time was bias in training data. For example, systems that turned words into vectors, so we could measure how close together certain concepts are, were trained on news articles spanning decades. Those articles, on average, contained the biases of their times. The word ‘doctor’ was mathematically closer to ‘man’, and the word ‘nurse’ was mathematically closer to ‘woman’. Models trained on this data inherently reflected that bias.
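To make that concrete, here’s a minimal sketch of how you can probe an embedding model for this kind of bias yourself, assuming Python with the gensim library and its downloadable pretrained Google News word2vec vectors (an illustrative stand-in, not the exact systems we worked with):

```python
# A minimal sketch, assuming the gensim library and its pretrained
# Google News word2vec vectors (a large download on the first run).
import gensim.downloader as api

# 300-dimensional vectors trained on roughly 100 billion words of news text.
model = api.load("word2vec-google-news-300")

# Cosine similarity: higher means the two words sit closer together
# in the vector space the model learned from its training data.
for word in ("doctor", "nurse"):
    print(f"{word:<6} vs 'man': {model.similarity(word, 'man'):.3f}"
          f"  vs 'woman': {model.similarity(word, 'woman'):.3f}")
```

On vectors like these, the similarity gaps tend to skew exactly the way described above. The model isn’t malicious; it is faithfully reproducing the statistics of the text it was trained on.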

You could see the impact of biased training data in how the accuracy of face recognition systems varied across different populations, or in the overwhelming concentration of speech-to-text capabilities in English, with few to none available in other languages. Other issues came to light as well, such as the use of face recognition by law enforcement and governments.

As an eternal optimist, I believed we could address these issues and still reap the overwhelming benefit of these technologies. For example, I love my Clear membership at airports, which uses eye and facial recognition to identify me and let me skip the security line. I explicitly gave Clear permission to use my biometrics for this purpose. In another, similar example, my passport photo was used to train a face recognition system that lets me board international flights faster. For me, this has added convenience to an otherwise monotonous task (in this case, waiting in lines at airports).

In 2023, GrayMeta asked me to return as CEO. One of the first things I did was instruct our teams to build our own ML models for use with our platforms. This came with a lot of advantages, one being control over the training data used for our models. Another is that we have more of a say in how our technology is used in the world.

For now, we’re focused on the libraries of video that many organizations hold as dark data: untagged, undiscoverable, and inaccessible. I take pride in the fact that we’re making people’s jobs so much more productive. As we expand to other use cases, the underlying problem we’re solving stays the same: how do I find, access, and make use of vast archives of content that exist in many different places in my enterprise? Machine learning is the only way to address this at scale without hiring an army of teenaged Aaron Edells to intern at your company and manually tag your content. The open datasets, weights, and research we use to build our models are state-of-the-art and, although not perfect, show significant reductions in the harmful biases that were more prevalent in 2017 and 2018. Because we have better control over our models, we’re always working to improve their accuracy and performance, including cleaning training datasets to be more representative of the content we see in the world.

As we look toward this next wave of ML technology, mostly centered on generative AI, we’re met with new challenges around responsible use, but our goal stays the same: helping workers find what they’re looking for in their storage, and enabling automation that makes their lives a thousand times better.

Using AI responsibly is a decision you make every day. It is an ongoing commitment to how you operate and impact the world. I hope you’ll join me in our journey at Wasabi to make everyone’s jobs easier to perform and a lot more productive.  
