More needs to be done to regulate AI and guard against faulty or biased algorithms that sow distrust and fear.

From diagnosing cancer to understanding climate change, AI promises to transform the world. However, anything that wields this kind of power needs checks and balances to guard against its potential for harm.

Some of the key ethical dilemmas around the adoption of AI involve job security, data privacy, racial bias, and safety. Publicly documented failures of AI include gender and racial bias in credit-approval software, racist chatbots, and driverless cars that failed to recognize road signs. More terrifyingly, an app last year used AI to ‘undress’ women in photographs, producing realistic nude images.

As more companies seek to operationalize AI by 2024, the time for glossing over AI’s shortcomings is over. The onus is on us to ensure that this transformative technology is not used to serve malicious ends. How?

  • Implementing an AI framework
    Businesses looking to adopt or develop AI solutions must first ensure that they have a strong system of checks and balances in place. This means adhering to a defined set of guidelines when creating or deploying AI applications.

    At the most basic level, this means being lawful, transparent, and aligned with fundamental social values. Beyond this, the framework should also protect human autonomy and ensure that AI applications are robust enough to prevent intentional or unintentional harm.

    While there is no worldwide standard yet for the ethical development of AI, technology leaders such as Google and the consultancy PwC have published their own guidelines for developers of AI applications. These are a good starting point for businesses that want to harness AI responsibly. At iKala, we hold our research and product development to these standards.
  • Leading by example
    The development and application of AI have been a major flashpoint for big tech companies. In response, many, including Microsoft, Google, and IBM, have begun self-regulating and drawing up their own guidelines. However, to ensure that big tech companies are not the only ones leading the debate, emerging industries must join the ethical and social discussions around AI. At iKala, for example, we work with some of the country’s best academics to ensure that our AI tools and security features are built responsibly.

    To ensure greater accountability, some experts have suggested an independent audit process that inventories all machine learning models, along with their use cases and risk ratings, to determine the social risks they carry; a rough sketch of such an inventory appears after this list.

    More governments and citizens must also become actively involved in setting the AI agenda. The EU’s publication last year of AI guidelines, covering technical safety, accuracy, bias, and the transparency of algorithms, was a welcome first step.
  • Maximizing efficiencies
    The core purpose of AI is to simplify life for humans, not to replace them.

    Thus, the technology should be designed to work with humans, easing their load and improving efficiency. We believe AI solutions should be labor-enabling rather than labor-replacing. AI can manage certain parts of the sales process, for example, but the true value of this intelligent automation is that it frees our clients to focus on engaging customers and growing their brands.
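
To make the audit idea above concrete, here is a minimal sketch, in Python, of what such a model inventory might look like. The record fields, risk scale, and example entries are hypothetical illustrations, not an established audit standard.

    # Hypothetical model-inventory sketch; the fields and risk scale
    # below are illustrative, not a prescribed audit standard.
    from dataclasses import dataclass
    from enum import Enum


    class RiskRating(Enum):
        LOW = 1
        MEDIUM = 2
        HIGH = 3
        CRITICAL = 4


    @dataclass
    class ModelRecord:
        name: str         # model identifier
        use_case: str     # what the model is used for
        risk: RiskRating  # assessed social-risk rating
        owner: str        # team accountable for the model


    def models_needing_review(inventory):
        """Return the entries auditors should scrutinize first."""
        return [m for m in inventory if m.risk in (RiskRating.HIGH, RiskRating.CRITICAL)]


    # Example entries (fictional models, for illustration only).
    inventory = [
        ModelRecord("credit-scorer", "consumer credit approval", RiskRating.CRITICAL, "risk-team"),
        ModelRecord("support-chatbot", "customer-service replies", RiskRating.MEDIUM, "cx-team"),
    ]

    for m in models_needing_review(inventory):
        print(f"{m.name}: {m.use_case} ({m.risk.name} risk)")

Even a simple registry like this gives an independent auditor a single place to see which models exist, what they are used for, and which ones warrant the closest scrutiny.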

For AI to deliver on its promise, it needs to be more than just effective—it needs to do the right thing.