    Using Responsible AI to Push Digital Transformation

    Just as its development is creating new business opportunities, AI has also delivered harmful and ethically objectionable outcomes, a symptom of a misguided rush to bring exciting technology to market that supersedes moral, and sometimes legal, responsibility.

    As with any new technology, some failure is expected. But AI’s extraordinary development in the past few years is littered with so many cringeworthy incidents that a growing crop of studies, forums, conferences, and think tanks has been established to address the ethical challenges and responsibilities technologists face in shaping AI in our own image.

    Most famously, researchers Joy Buolamwini of the MIT Media Lab and Timnit Gebru found that facial recognition systems developed by Microsoft, IBM, and Megvii misclassified darker-skinned faces, and darker-skinned women in particular, at far higher rates than lighter-skinned ones. It was not an isolated finding, yet big tech companies continued to market the software, including, most disturbingly, to law enforcement.

    Buolamwini and Gebru’s research on biased AI stands alongside other, more overt manifestations of the technology going awry. Early in 2020, Tesla came under investigation by the National Highway Traffic Safety Administration (NHTSA) over 13 crashes dating back to 2016 in which the agency believes the cars’ Autopilot was engaged. And last year, a chatbot designed to lighten the load of overworked doctors suggested that a patient kill themselves during a simulation. The bot’s underlying language model, developed by AI company OpenAI, has been criticized for “generating racist, sexist, and otherwise toxic language which hinders their safe deployment.”

    The Historic Push for Responsible AI

    One of the key takeaways in the 2020 State of AI and Machine Learning Report, issued by AI development firm Appen, is that an increasing number of enterprises are getting behind responsible AI as a driver of organizational success, yet only 25% of companies report that unbiased AI is mission-critical.

    “Organizations are beginning to take a more holistic approach to AI initiatives with an emphasis on risk management, governance, and ethics gaining traction,” the report states. “This may be due to the increased C-suite visibility opening up discussions around responsible AI. As more companies deploy AI at a global scale, the need for AI to work for everyone makes data diversity and bias more prominent. 100% of respondents who rolled out their initiatives globally or to their full user base identified ethics, governance, or risk management as a lens used when thinking about AI.”

    This step toward global, enterprise-wide consideration of unbiased AI is both heartening and disappointing when placed in the context of the ongoing battle to put ethics and morality at the center of AI development. That 25% of companies now treat unbiased AI as mission-critical is good. Yet as far back as 2016, a year that saw Facebook and Google face criticism over algorithms that played a role in the U.S. presidential election and the Brexit vote in the U.K., technologists, scientists, and ethicists were already scrambling to contain what they saw as potential risks and threats raised by the rapid development of machine learning and AI.

    Over the past five years, the push and pull between AI development and calls for a more responsible approach to bringing AI to market has produced several dozen published sets of ethical principles and guidelines. This trove of documents also charts the struggle between big tech companies and ethicists to get it right.

    We have now arrived at what human rights lawyer Carly Kind, director of the Ada Lovelace Institute, calls the third wave of ethical AI: a movement shaped by racial, social, economic, and environmental justice (“just AI”) that features a deep investment in understanding how the technology can best be applied.

    As the frameworks for responsible AI are refined, it becomes important that these scholarly moral discussions continue in lockstep with the implementation of effective guidelines that go beyond lip service and haphazardly applied best practices.

    Establishing Ethical AI Best Practices

    For organizations looking to build responsible AI into their operations, a few key approaches are emerging. With humanity at the core of many responsible AI frameworks, the focus falls on evaluating, deploying, and monitoring AI in ways that continually address ethical questions, backed by company policies and programs that apply effective solutions swiftly.

    Education and Training

    Creating a work environment in which all employees are versed in how AI works empowers everyone in the organization to flag issues, which in turn helps the company investigate and address problems that run counter to its responsible AI goals. Incentivization can be a great tool here, such as metrics that reward employees for identifying problems and offering solutions that strengthen responsible AI across the company. Employee AI training should also include:

    • How and why AI is integrated into the company’s operations
    • How AI affects and benefits employees’ work
    • How to understand and act on AI insights to improve and deliver better project outcomes

    Governance

    A shift in company culture is a key step toward wider adoption of ethical AI best practices: one built on a proactive approach to implementing and interrogating the AI system, coupled with rewards for keeping that system within a responsibility framework by reporting ethical issues without fear of reprisal. This also requires transparency throughout the organization, underpinned by strong governance that can include AI review boards as well as outside industry experts and researchers, all of whom can identify blind spots and help shape processes and guidelines that mitigate potential risks and harm.
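
    One way to make such governance concrete is to keep structured, auditable records of review-board findings. The Python sketch below is a minimal illustration; the schema, model name, and all field values are assumptions for the example, not a standard the article prescribes.

    # A minimal sketch of a structured record an AI review board might keep.
    # The schema, model name, and all field values are illustrative.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class AIReviewRecord:
        model_name: str
        review_date: date
        reviewers: list[str]         # internal board members plus outside experts
        risks_identified: list[str]  # blind spots surfaced during the review
        mitigations: list[str]       # process or guideline changes agreed on
        approved: bool = False       # deployment sign-off by the review board

    # Example record for a hypothetical credit model under review.
    record = AIReviewRecord(
        model_name="loan-approval-v2",
        review_date=date(2021, 6, 1),
        reviewers=["internal-ethics-board", "external-fairness-researcher"],
        risks_identified=["zip code may act as a proxy for race"],
        mitigations=["drop the zip_code feature", "schedule quarterly bias audits"],
    )
    print(record)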

    Design

    Thoughtful design of an ethical AI framework requires a collaborative user interface that, from the outset, instills trust by accounting for privacy, transparency, and security in equal measure. The result is a system built on explainable AI that maps its decision-making rationale, giving C-suite teams insight into the AI’s computational work and a path to solving ethical problems as soon as they arise.
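
    The article does not name a specific explainability method. As one hedged illustration, the Python sketch below uses scikit-learn’s permutation importance to surface which inputs most influence a model’s decisions; the dataset, model, and feature names are hypothetical stand-ins.

    # Explainability sketch: rank features by how much shuffling each one
    # degrades model accuracy. Data, model, and feature names are illustrative.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Stand-in data; in practice this would be the organization's own dataset.
    X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
    feature_names = ["income", "age", "tenure", "zip_code", "usage"]  # hypothetical

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Permutation importance: how much does shuffling each feature hurt accuracy?
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for name, score in sorted(zip(feature_names, result.importances_mean),
                              key=lambda pair: -pair[1]):
        print(f"{name}: {score:.3f}")

    A ranking like this gives reviewers a concrete starting point: if a sensitive feature or a likely proxy dominates, that is an ethical question to resolve before deployment.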

    Monitoring

    Post-deployment, human monitoring of AI must continue alongside frequent audits of the system against established metrics that include accountability, bias, and security. By focusing on building processes and frameworks that identify and document data bias, inference results, and the implications of that bias, organizations can better address bias in their AI systems and mitigate potential risks.
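
    The article leaves the choice of bias metrics open. As one illustrative example, the sketch below runs a demographic-parity style check on a hypothetical log of production predictions, flagging large gaps in positive-prediction rates across groups; the 0.8 alert threshold borrows the “four-fifths rule” from U.S. employment guidelines as an assumption, not a standard from the article.

    # Post-deployment bias check: compare positive-prediction rates across
    # groups and flag disparities. Group labels and threshold are illustrative.
    import pandas as pd

    # Hypothetical log of recent production predictions.
    log = pd.DataFrame({
        "group":      ["A", "A", "A", "B", "B", "B", "B", "A"],
        "prediction": [1,   0,   1,   0,   0,   1,   0,   1],
    })

    rates = log.groupby("group")["prediction"].mean()
    disparity = rates.min() / rates.max()  # demographic-parity ratio

    print(rates)
    print(f"Disparity ratio: {disparity:.2f}")
    if disparity < 0.8:  # four-fifths rule, used here only as an example threshold
        print("ALERT: selection-rate disparity exceeds threshold; trigger an audit.")

    In practice, a check like this would run on a schedule against live prediction logs, with alerts routed into the governance process described above.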

    Drawing Ethical AI Lines

    AI is a technological double-edged sword, one that challenges humans to make progress while still adhering to the mores that keep our humanity in check. AI helps, but it also harms. Plotting and understanding AI’s ethical ramifications may be difficult, and it may even slow the adoption of some aspects of the technology.

    However, our social contract demands that these conversations and these measures be taken seriously by big tech companies and business organizations that, by virtue of their wealth and power, are best positioned to see that AI’s progress does not come at the expense of our humanity. The steps noted above for developing ethical AI guidelines are a starting point for organizations to build an ongoing practice that, over time, will shape AI’s future as an aid, not a hindrance, to our own ethical advancement.

    Llanor Alleyne
    Llanor Alleyne is managing editor of a portfolio of enterprise IT and SMB technology sites, including IT Business Edge, Enterprise Networking Planet, and Small Business Computing. In an editorial career that has spanned nearly 18 years, Llanor previously held editorial leadership roles at Residential Systems Magazine, Digital Signage Magazine, and media company AVNation.TV. Previously the host of the Digital Signage Digest podcast, Llanor is committed to understanding the impact of technology on social mores and folkways. Her deep knowledge base includes audio/video integration, IoT/smart home, immersive tech, IT, and more.
