Last updated: 12 December 2023.
The world is witnessing a Cambrian explosion of AI tools that can do almost anything, from creating award-winning art to writing academic articles and crunching advanced equations.
Mass AI adoption is now a matter of “when” and not “if”. In 2022, around 91.5% of leading companies actively invested in AI, and 75% integrated AI into their business strategies, according to Accenture.
While it’s nigh-impossible to avoid being swept up by AI and its dizzying potential, startups need to take a deep breath and plan their AI-supported strategies carefully.
Here’s what startups need to know about responsible AI.
To use AI responsibly, it’s first crucial to understand why ethical debates surrounding AI exist in the first place. There is no shortage of high-profile “AI gone wrong” examples, from facial recognition systems that misidentify people who aren’t white to recruitment tools that discriminated against women.
Discussions of AI’s shortcomings quickly turn gloomy, partly because the potential for good is so great: it’s disappointing when we fall short of it. Of course, AI adoption has many upsides, but realising them is something we must strive for rather than take for granted.
AI models are fallible as the data used to train them is rarely perfect.
For instance, GPT-3 was trained primarily on data retrieved from the internet. That makes sense: the internet is the largest data source on the planet, and most of it is free.
However, the internet remains a relatively new invention, it’s written primarily in English, and a large share of the world’s population still doesn’t use it, let alone contribute to it. Not only does it have blind spots, but it also inherits bias and prejudice from its contributors.
Treating the internet as a complete record of human knowledge is risky, and models trained on internet data reproduce the internet’s blind spots and biases.
We often imagine AI to possess superhuman judgment and objectivity, but this isn’t the case. It’s only as fair, just and objective as it’s designed to be. There’s also been a lag in producing the datasets required to train accurate and effective models.
For example, some facial recognition ‘gold sets’ (datasets deemed among the best in their class) are heavily weighted towards white men.
This is partly to blame for the poor performance of some facial recognition AIs, which repeatedly misidentify people who aren’t white.
Similarly, the data used in failed recruitment AIs reflected a time when women and other minority groups were underrepresented in the roles those systems were screening candidates for.
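To make this concrete, here’s a minimal sketch of the kind of sanity check a team might run on a training set before using it. The dataset, column names and values below are entirely hypothetical.

```python
# A minimal sketch of checking a training set for representation skew before
# training a model. Column names and values are hypothetical.
import pandas as pd

training_data = pd.DataFrame({
    "image_id":  [1, 2, 3, 4, 5, 6, 7, 8],
    "skin_tone": ["light"] * 6 + ["dark"] * 2,
    "gender":    ["male", "male", "male", "male", "female", "female", "male", "female"],
})

# Share of each demographic group; large imbalances here tend to resurface
# later as uneven model performance across those groups.
group_share = training_data.groupby(["skin_tone", "gender"]).size() / len(training_data)
print(group_share)
```

A skewed table like this doesn’t doom a project, but it’s the earliest and cheapest place to catch the kind of imbalance described above.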
Researchers and AI ethicists have been keen to point this out, because building public awareness of AI’s potential blind spots is essential if AI is to remain a force for good.
Ultimately, AI is modelled on organic structures, i.e. the human brain. The human brain is far from infallible; thus, neither is AI.
Well, if startups invest in and use AI that “goes wrong”, they’ll be culpable for the consequences, which could be both financial and reputational.
Moreover, startups are more vulnerable than big companies as they tend to lack dedicated AI ethics departments and have to manage due diligence internally alongside many other tasks.
For startups, building an overarching understanding of what responsible AI looks like is essential.
A widely cited paper by AI ethicists at Oxford University highlights five key principles for responsible AI.
AI must do something good, such as preventing the spread of infectious diseases, analysing pollution to improve air quality, automating potentially dangerous tasks to reduce human injury, etc.
In a commercial context, benevolent AI can streamline tasks to reduce time-consuming manual labour or improve upon human decision-making.
In doing good, AI mustn’t inflict harmful effects or side effects. For example, an AI shouldn’t manipulate financial markets to earn money if the consequence is eroding people’s incomes.
AI designed to replace human decision-making must remain conscious of the cost, i.e. loss of jobs and human productivity. After all, if AI takes everyone’s jobs, people won’t have incomes, and governments won’t be able to raise money through taxes.
AI should be explainable and auditable, e.g. it should be able to explain why it came to a decision. No AI should be a ‘black box’, meaning we only see inputs and outputs and not what’s going on inside the algorithm.
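As a rough illustration of the difference, here’s a minimal sketch using a deliberately simple, inspectable model. The loan-style features and the approval rule are made up for the example; the point is that the resulting decision logic can be printed and audited.

```python
# A minimal sketch of the 'explicability' principle: prefer models whose
# decisions can be traced and explained. All data here is synthetic and the
# loan-approval scenario is purely illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(seed=0)

# Hypothetical features: income (in £k) and years of credit history.
X = rng.uniform(low=[10, 0], high=[120, 30], size=(500, 2))
y = ((X[:, 0] > 40) & (X[:, 1] > 2)).astype(int)  # synthetic approval rule

# A shallow decision tree is easy to audit, unlike an opaque 'black box'.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X, y)

# Print the decision rules and how much each feature drives them.
print(export_text(model, feature_names=["income_k", "credit_years"]))
print(dict(zip(["income_k", "credit_years"], model.feature_importances_.round(2))))
```

More complex models may perform better, but their reasoning can’t be read line by line like this, and that’s exactly the trade-off this principle asks you to weigh.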
AI should embed the social values we want it to reflect. If we train models on historical data, we should expect them to reflect historical values, which often fall short on diversity and inclusion.
The autonomy given to machines should be restricted and, crucially, reversible. AI should be predictable: it should do what we expect and intend it to do.
Even ardent futurists like Elon Musk worry about what will happen if AI is released from human control to act with total autonomy.
Treat AI investment the same as any other by ensuring tools and their uses align with your company's culture and governance style. Keep your tech stack under control rather than building a sprawling, unmanaged collection of AI tools.
AI is a social movement as well as a business movement, and humanity must work collaboratively to control it. If we do allow it to spiral out of control, the stakes could barely be higher.
Professor David Shrier at Imperial College London sums this up well:
“The costs of failing to responsibly deploy technologies are existential, not only for individual organisations, but for entire countries.”
Startups are set to play a pivotal role in steering AI towards ethical and responsible use. Responsible AI usage doesn’t just ward off the negative consequences of “AI gone wrong”; it’s also a marker of sound business governance.
If a bank can make 5% more for its shareholders through automation, is it obliged to do so? What if that comes at the cost of thousands of jobs? Are there other consequences?
These types of questions are very real, and striking a balance between automation and its consequences is exceedingly tricky. Startups have the opportunity to plan for these problems before they approach a critical mass.
Consider the implications of automation: job losses, low morale, unsatisfactory work, lack of clarity and loss of talent are all known side effects. Just because an AI can replace someone, should it? And can it really do the job better?
One way to navigate this issue is to harness AI’s additive benefits. Rather than automating tasks to remove human input, use AI to scale up people’s skills.
ChatGPT is an excellent example of this. Soon, we’ll be able to complete a wide range of labour-intensive tasks using prompts, from writing articles to designing web pages or even building other machine learning models.
The results are impressive, but they become much stronger when combined with human input. ChatGPT and associated apps will raise the bar and set a new, higher standard for genuine human work.
After all, if everyone can use AI to complete a task, businesses will need to find new ways to differentiate themselves. Responsible AI usage is a valuable component of that process.
AI is evolving rapidly, which is risky for the thousands of businesses building workflows that wholly rely on it. Startups shouldn’t throw everything behind a single tool only to have the rug pulled out from under them.
For example, in February 2023, OpenAI suddenly introduced ChatGPT Plus, a $20-a-month subscription with privileges over the free version. You’ve got to ask: what developments are forthcoming, and what will they change for businesses?
Will OpenAI, Google, etc., permit any and all businesses to use their products at will? What happens when regulations shake up the AI industry?
Right now, a handful of major players hold the aces, and their dominance makes AI investment precarious.
In 2021, the European Commission released its proposal for AI regulation, “a proposal for a Regulation laying down harmonised rules on artificial intelligence.”
On 8 December 2023, European Union lawmakers reached a provisional agreement on the EU AI Act (the world's first serious move to govern AI).
The EU AI Act aims to regulate AI use in Europe, setting rules for AI tools like ChatGPT. Although the full text won't be out until 2024, startups can prep for compliance.
The UK Government, on the other hand, doesn't intend to introduce new legislation. Keen to "establish the UK as an AI superpower", the Rt Hon Michelle Donelan MP describes a different approach in a white paper foreword:
"We set out a proportionate and pro-innovation regulatory framework. Rather than target specific technologies, it focuses on the context in which AI is deployed."
In other words, watch this space!
Startups should consider who’s in charge of their AI tools and how they can implement security measures to govern sensitive data.
When regulations come into force, AI vendors won't necessarily comply with them by default, meaning businesses will need to conduct their own audits. To govern and manage AI internally, start by asking some basic questions about each tool you use: who is responsible for it, what data does it touch, and how is that data secured?
Screening AI tools will sit alongside other types of risk assessment. Consider creating a written AI risk management strategy: list every tool you use, its potential security risks, and so on.
And if you haven't already, start drafting an internal AI usage policy and facilitating training for your team.
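As a starting point, here's a minimal sketch of what such an internal AI tool register might look like in practice. The fields, tool names and dates are illustrative assumptions rather than a prescribed format.

```python
# A minimal sketch of an internal AI tool register. Field names, tools and
# dates are illustrative assumptions, not a required schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class AIToolRecord:
    name: str
    vendor: str
    use_case: str
    data_shared: list          # categories of data sent to the tool
    handles_personal_data: bool
    owner: str                 # person accountable internally
    next_review: str           # ISO date of the next risk review

register = [
    AIToolRecord(
        name="ChatGPT",
        vendor="OpenAI",
        use_case="Drafting first-pass marketing copy",
        data_shared=["prompts", "product descriptions"],
        handles_personal_data=False,
        owner="Head of Marketing",
        next_review="2024-06-01",
    ),
]

# Flag anything that touches personal data for closer scrutiny.
for record in register:
    if record.handles_personal_data:
        print(f"{record.name}: review data handling before {record.next_review}")

# Keep the register somewhere auditable, e.g. exported alongside other policies.
print(json.dumps([asdict(r) for r in register], indent=2))
```

Even a lightweight register like this gives you something concrete to review when regulations land or a vendor changes its terms.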
Transparency, accountability and education will take centre stage as AI innovation forges ahead. Governments and regulators are racing to control and harness AI through regulation, and startups need to stay ahead of these developments.
While big businesses have the clout and money to invest in AI ethics departments and dedicated functions for AI responsibility, startups can't always afford that luxury.
Planning a careful, considerate and transparent approach to AI investment and usage will win the day. And don’t forget to keep tabs on the latest developments in AI laws and regulations!
We're on a mission to help startup founders and their teams succeed. That's why we've created a suite of equity management tools. Discover what Vestd can do.