Focus on large language models instead of AI consciousness.

A Google engineer recently made worldwide news when he claimed that LaMDA, the company's chatbot development system, was sentient. Since his initial post, public debate has raged over whether artificial intelligence (AI) can be conscious and feel emotions the way humans do.

While the subject is undeniably intriguing, it overshadows other, more immediate problems posed by large language models (LLMs), such as unfairness and loss of privacy, particularly for companies rushing to incorporate these models into their products and services. These dangers are exacerbated when the organizations deploying the models lack a clear understanding of the data and procedures used to build them, which can lead to problems with bias, hate speech, and stereotyping.

What exactly are LLMs?

LLMs are large neural networks trained on big datasets of free-form text (think books, Wikipedia, Reddit, and the like). Although they are designed to generate text, such as summarizing documents or answering questions, they have proven to excel at many other tasks, including generating websites, suggesting medications, and doing simple math.
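To make this concrete, here is a minimal sketch of what generating text from a pre-trained LLM looks like in practice. It assumes the Hugging Face transformers library and the small, publicly available GPT-2 checkpoint; both are illustrative choices, not the specific models discussed in this article.

```python
# Minimal sketch: text generation with a small pre-trained language model.
# Assumes the Hugging Face `transformers` library and the public GPT-2
# checkpoint (illustrative choices only).
from transformers import pipeline

# Load a pre-trained model and wrap it in a text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

# The model was never trained for this specific request; it simply
# predicts likely next tokens, which is what makes LLMs so general.
result = generator(
    "In summary, the report finds that",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```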

LLMs have become a critical field of study because of their ability to generalize to tasks they were not originally built for. Commercialization is happening across sectors as companies customize base models produced and trained by others (for example, OpenAI, Google, Microsoft, and other technology companies) to their particular needs, as in the sketch below.
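To illustrate what that customization typically looks like, the sketch below fine-tunes a publicly available pre-trained model on a small labeled dataset for one narrow task. The model name (distilbert-base-uncased), the dataset (IMDB reviews), and the hyperparameters are assumptions chosen for brevity, not a recommended configuration.

```python
# Hedged sketch: adapting someone else's pre-trained model to a specific need
# (here, binary sentiment classification). All choices are illustrative.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_model = "distilbert-base-uncased"  # pre-trained by a third party
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=2)

# A small labeled dataset standing in for a company's particular task.
dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned-model",
        num_train_epochs=1,
        per_device_train_batch_size=16,
    ),
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=tokenized["test"].select(range(500)),
)

# The expensive general-purpose pre-training has already been paid for by
# the model provider; this step only adapts it to the narrow task.
trainer.train()
```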

Stanford researchers coined the term "foundation models" to describe how these pre-trained models underpin many other applications. Unfortunately, these enormous models also come with significant hazards.

The disadvantages of LLMs

Among these dangers is the potential for significant environmental damage. One widely cited study from 2019 found that training a single enormous model can emit as much carbon as five cars over their lifetimes, and models have only grown larger since then. This environmental toll directly affects a company's ability to meet its sustainability commitments and, more broadly, its ESG ambitions. Even when companies rely on models trained by others, the carbon footprint of training those models cannot be ignored, since a company must account for emissions across its entire supply chain.
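The arithmetic behind the "five cars" comparison is straightforward. The figures below are approximations reported in that 2019 study and should be read as order-of-magnitude estimates, since real footprints vary widely with hardware, energy mix, and model size.

```python
# Back-of-envelope reproduction of the "five cars" comparison, using
# approximate figures reported in the widely cited 2019 study.
training_emissions_lbs = 626_000  # ~CO2e for one very large model trained
                                  # with neural architecture search (reported figure)
car_lifetime_lbs = 126_000        # ~CO2e for an average car over its lifetime,
                                  # including fuel (reported figure)

ratio = training_emissions_lbs / car_lifetime_lbs
print(f"Roughly {ratio:.1f} car-lifetimes of CO2e")  # -> Roughly 5.0
```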

Then there is the question of bias. The online data sources commonly used to train these models have been shown to be biased against various groups, including people with disabilities and women. They also over-represent younger users from wealthy countries, amplifying that worldview while diminishing the influence of under-represented populations.

This directly affects companies' DEI commitments. Their AI systems may continue to propagate prejudices even as they work to address bias in other areas of the business, such as hiring. They may also ship consumer-facing applications that fail to deliver consistent or reliable results across regions, ages, or other customer subgroups.

LLMs can also produce unexpected and disturbing outputs, posing serious hazards. Consider the artist who used an LLM to recreate his childhood imaginary friend, only to have that imaginary friend urge him to put his head in the microwave. While this is an extreme case, companies cannot ignore these risks, especially when LLMs are used in high-stakes domains such as healthcare.

These risks are compounded by a potential lack of transparency into all the components involved in building a modern, production-grade AI system: data pipelines, model inventories, optimization metrics, and broader design choices about how the system interacts with people. Companies therefore should not incorporate pre-trained models into their products and services without carefully assessing their intended use, source data, and the other factors that contribute to the hazards above; one possible shape of such an assessment is sketched below.
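One lightweight way to make that assessment concrete is to require a structured intake record before any third-party model reaches production. The sketch below shows one hypothetical shape such a record might take; every field name here is an assumption for illustration, not an established schema.

```python
# Hedged sketch: a pre-deployment intake record for a third-party model.
# Field names and the readiness rule are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ModelIntakeRecord:
    model_name: str
    provider: str
    intended_use: str
    training_data_sources: list[str]
    known_limitations: list[str] = field(default_factory=list)
    optimization_metric: str = "unknown"
    reviewed_by: str = "unassigned"

    def ready_for_production(self) -> bool:
        # A crude gate: incomplete until reviewed and until data sources
        # and at least one known limitation are documented.
        return (
            self.reviewed_by != "unassigned"
            and bool(self.training_data_sources)
            and bool(self.known_limitations)
        )

record = ModelIntakeRecord(
    model_name="example-foundation-model",   # hypothetical
    provider="example-provider",             # hypothetical
    intended_use="customer-support summarization",
    training_data_sources=["web crawl (sources unspecified)", "forum posts"],
)
print(record.ready_for_production())  # False: not yet reviewed, no limitations logged
```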

LLMs hold great promise and can deliver tremendous commercial benefits under the right conditions. But pursuing those benefits cannot mean ignoring dangers that could result in harm to consumers and society, lawsuits, regulatory violations, and other consequences for the business.

Responsible AI’s Promise

Companies exploring AI must have a robust responsible AI (RAI) program to ensure their AI systems align with their business values. Such a program starts with a broad strategy that incorporates guiding principles, risk taxonomies, and a definition of AI-specific risk appetite.

Governance and processes to detect and manage risk are also critical to such a program. These include clear accountability, escalation and monitoring, and direct integration with broader enterprise risk functions.

At the same time, employees need channels to raise ethical concerns without fear of retaliation, and those concerns must then be reviewed clearly and openly. A culture shift that aligns the RAI program with the company's mission and values improves the odds of success. Finally, the major product-development levers, such as KPIs, portfolio monitoring and controls, and program steering and design, can all increase the chance of success.

Meanwhile, it is critical to design mechanisms for embedding responsible AI expertise into product development. This includes a systematic risk-assessment process in which teams identify all key stakeholders, examine potential second- and third-order effects, and design mitigation plans.

Given the sociotechnical character of many of these challenges, it is also critical to involve RAI specialists in high-risk efforts to support this process. Teams also need new technology, tools, and frameworks that help them move faster while building solutions responsibly. These include software toolkits, responsible-development playbooks, and documentation templates for audits and transparency, as in the sketch below.
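To give a feel for what such tooling automates, the sketch below implements one of the simplest possible checks: comparing a model's positive-outcome rate across customer subgroups. The data, group labels, and the 80% threshold are purely illustrative assumptions, not a legal or regulatory standard.

```python
# Hedged sketch: a simple subgroup-disparity check of the kind a
# responsible-development toolkit might run automatically.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of positive predictions per subgroup."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical model outputs and the subgroup each customer belongs to.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "B", "B", "B", "B", "A", "A", "B"]

rates = selection_rates(preds, groups)
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparity ratio = {ratio:.2f}")

# Flag for human review if one group's rate falls below 80% of another's
# (an illustrative threshold only).
if ratio < 0.8:
    print("Potential disparity: route to the responsible-AI review process.")
```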

Leading from the top with RAI

Business leaders should be prepared to communicate their RAI commitments and practices both internally to employees and publicly to customers. Creating an AI code of conduct, for example, goes beyond high-level principles to describe a company's concrete approach to responsible AI.

Beyond averting inadvertent harm to consumers and, more broadly, society, RAI can be a significant advantage for businesses. Leaders in responsible AI report better customer retention, stronger market differentiation, faster innovation, and improved employee recruitment and retention. External communication about a company's RAI efforts also provides the transparency needed to build consumer trust and realize these advantages.

LLMs are powerful tools with the potential for substantial commercial impact. But they also introduce real hazards that must be recognized and managed. By taking the right steps, corporate leaders can balance the rewards against the risks, building transformative products while limiting harm to consumers, employees, and society. We must not let the debate over sentient AI distract us from these pressing, already-present challenges.
