Building Transparency into AI Projects

As algorithms and AIs become ever more embedded in people’s lives, there’s also a growing demand for transparency around when an AI is used and what it’s being used for. That means communicating why an AI solution was chosen, how it was designed and developed, on what grounds it was deployed, how it’s monitored and updated, and the conditions under which it may be retired. Building in transparency has four specific effects: 1) it decreases the risk of error and misuse, 2) it distributes responsibility, 3) it enables internal and external oversight, and 4) it expresses respect for people. Transparency is not an all-or-nothing proposition, however. Companies need to find the right balance with regard to how transparent to be with which stakeholders.

In 2018, one of the largest tech companies in the world premiered an AI that called restaurants and impersonated a human to make reservations. To “prove” it was human, the company trained the AI to insert “umms” and “ahhs” into its request: for instance, “When would I like the reservation? Ummm, 8 PM please.”

The backlash was immediate: journalists and citizens objected that people were being deceived into thinking they were interacting with another person, not a robot. People felt lied to.

The story is both a cautionary tale and a reminder: as algorithms and AIs become ever more embedded in people’s lives, there’s also a growing demand for transparency around when an AI is used and what it’s being used for. It’s easy to understand where this is coming from. Transparency is an essential element of earning the trust of consumers and clients in any domain. And when it comes to AI, transparency is not only about informing people when they are interacting with an AI, but also communicating with relevant stakeholders about why an AI solution was chosen, how it was designed and developed, on what grounds it was deployed, how it’s monitored and updated, and the conditions under which it may be retired.

Seen in this light, and contrary to many organizations’ assumptions, transparency is not something that happens at the end, once a model is deployed and someone asks about it. Transparency is a chain that travels from designers to developers to the executives who approve deployment to the people it impacts, and everyone in between. Transparency is the systematic transfer of knowledge from one stakeholder to another: data collectors being transparent with data scientists about what data was collected and how it was collected, and data scientists, in turn, being transparent with executives about why one model was chosen over another and what steps were taken to mitigate bias, for instance.

As companies increasingly integrate and deploy AIs, they should consider how to be transparent and what additional processes they might need to introduce. Here’s where companies can start.

The Impacts of Being Transparent

While the overall goal of being transparent is to engender trust, it has at least four specific kinds of effects:

It decreases the risk of error and misuse.

AI models are highly complex systems — they are designed, developed, and deployed in complex environments by a variety of stakeholders. This means that there is a lot of room for error and misuse. Poor communication between executives and the design team can lead to an AI being optimized for the wrong variable. If the product team doesn’t explain how to properly handle the outputs of the model, introducing AI can be counterproductive in high-stakes situations.

Consider the case of an AI designed to read x-rays in search of cancerous tumors. X-rays the AI labeled as “positive” for tumors were then reviewed by doctors. The AI was introduced because, it was thought, a doctor could review 40 AI-flagged x-rays more efficiently than 100 unflagged ones.

Unfortunately, there was a communication breakdown. In designing the model, the data scientists reasonably judged that erroneously marking an x-ray as negative when it in fact shows a cancerous tumor could have very dangerous consequences, so they set a low tolerance for false negatives and, thus, a high tolerance for false positives. This information, however, was never communicated to the radiologists who used the AI.

The result was that the radiologists spent more time analyzing 40 AI-flagged x-rays than they did 100 non-flagged x-rays. They thought, the AI must have seen something that I’m missing, so I’ll keep looking. Had they been properly informed — had the design decision been made transparent to the end-user — the radiologists may have thought, I really don’t see anything here and I know the AI is overly sensitive, so I’m going to move on.
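To see that design decision concretely, here is a minimal sketch of how a classification threshold trades false negatives for false positives. The function, scores, and threshold values are hypothetical illustrations, not the actual system described above.

```python
# Hypothetical sketch: how a decision threshold trades false negatives
# for false positives. All names and numbers are invented for illustration.

def flag_xray(tumor_probability: float, threshold: float) -> bool:
    """Flag an x-ray for radiologist review if the model's estimated
    probability of a tumor meets the threshold."""
    return tumor_probability >= threshold

# Model scores for five hypothetical x-rays.
scores = [0.05, 0.12, 0.35, 0.60, 0.91]

# A neutral threshold flags only fairly confident cases.
print([flag_xray(s, threshold=0.50) for s in scores])
# [False, False, False, True, True]

# Because a missed tumor (false negative) is far more dangerous than an
# unnecessary review (false positive), the designers lower the threshold.
# More x-rays get flagged, including many that contain no tumor.
print([flag_xray(s, threshold=0.10) for s in scores])
# [False, True, True, True, True]
```

A radiologist told only “the AI flagged these” and a radiologist told “the threshold was deliberately set low, so expect many false positives” will treat the same flagged x-ray very differently.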

It distributes responsibility.

Executives need to decide whether a model is sufficiently trustworthy to deploy. Users need to decide how to use the product in which the model is embedded. Regulators need to decide whether a fine should be levied due to negligent design or use. Consumers need to decide whether they want to engage with the AI. None of these decisions can be made if people aren’t properly informed, which means that if something goes wrong, blame falls on the shoulders of those who withheld important information or undermined the sharing of information by others.

For example, an executive who approves use of the AI first needs to know, in broad terms, how the model was designed. That includes, for instance, how the training data was sourced, what objective function was chosen and why it was chosen, and how the model performs against relevant benchmarks. Executives and end users who are not given the knowledge they need to make informed decisions — including knowledge without which they don’t even know there are important questions they are not asking — cannot be reasonably held accountable.
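One way to systematize this, sketched below, is a simple “model fact sheet” that accompanies any request for deployment approval. The fields and values are illustrative assumptions, not a standard or a prescribed format.

```python
# Hypothetical sketch of a "model fact sheet" an executive might require
# before approving deployment. Fields and values are invented for
# illustration; this is not a standard or a prescribed format.
from dataclasses import dataclass, field

@dataclass
class ModelFactSheet:
    model_name: str
    data_sources: list[str]       # how the training data was sourced
    objective: str                # what the model is optimized for
    objective_rationale: str      # why that objective was chosen
    benchmarks: dict[str, float]  # performance on relevant benchmarks
    known_limitations: list[str] = field(default_factory=list)

sheet = ModelFactSheet(
    model_name="resume-screener-v2",
    data_sources=["2018-2021 internal hiring records (anonymized)"],
    objective="rank candidates by predicted interview success",
    objective_rationale="recruiter review time is the binding constraint",
    benchmarks={"AUC": 0.81, "false_negative_rate": 0.07},
    known_limitations=["career changers underrepresented in training data"],
)
print(sheet.model_name, sheet.benchmarks)
```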

Failure to communicate this kind of information is, in some cases, a dereliction of duty. In other cases, particularly for more junior personnel, the fault lies not with the person who failed to communicate but with the people, especially leaders, who failed to create the conditions under which clear communication is possible. For instance, a product manager who insists on controlling all communication from their group to anyone outside it may unintentionally block important communications by becoming a bottleneck.

When there is transparency from start to finish, genuine accountability can be distributed among everyone involved, because each person has the knowledge they need to make responsible decisions.

It enables internal and external oversight.

AI models are built by a handful of data scientists and engineers, but the impacts of their creations can be enormous, both for the bottom line and for society as a whole. As with any other high-risk situation, oversight is needed both to catch errors made by the technologists and to spot potential problems technologists may not have the training to recognize, be they ethical, legal, or reputational risks. There are many decisions in the design and development process that simply should not be left (solely) in the hands of data scientists.

Oversight is impossible, however, if the creators of the models do not clearly communicate to internal and external stakeholders what decisions were made and on what basis. One of the largest banks in the world, for instance, was recently investigated by regulators over an allegedly discriminatory algorithm, an investigation that requires regulators to have insight into how the model was designed, developed, and deployed. Similarly, internal risk managers or boards cannot fulfill their function if both the product and the process that produced it are opaque to them, which increases risk to the company and to everyone affected by the AI.

It expresses respect for people.

People who spoke with the reservation-making AI felt they had been tricked. In other cases, AI can be used to manipulate people or push them toward choices they would not otherwise make. AI plays a crucial role, for instance, in the spread of disinformation, nudges, and filter bubbles.

Consider, for instance, a financial advisor who hides the existence of some investment opportunities and emphasizes the potential upsides of others because he earns a larger commission on the latter. That’s bad for clients in at least two ways: first, they may end up in a bad investment, and second, it’s manipulative and fails to secure their informed consent. Put differently, this advisor fails to respect his clients’ right to determine for themselves which investments are right for them.

The more general point is that AI can undermine people’s autonomy: their ability to see the options available to them and to choose among them without undue influence or manipulation. The extent to which some options are quietly pushed off the menu while others are repeatedly promoted is, roughly, the extent to which people are pushed into boxes rather than given the ability to choose freely. The corollary is that transparency about whether an AI is being used, what it’s used for, and how it works expresses respect for people and their decision-making abilities.

What Good Communication Looks Like

Transparency is not an all-or-nothing proposition. Companies need to find the right balance with regard to how transparent to be with which stakeholders. For instance, no organization wants to be transparent in a way that would compromise its intellectual property, and so some people should be told very little. Conversely, high-risk applications of AI may call for going above and beyond standard levels of transparency.

Identifying all potential stakeholders, both internal and external, is a good place to start. Ask them what they need to know in order to do their jobs. A model risk manager in a bank, for instance, may need information about the model’s decision threshold, while a human resources manager may need to know how input variables are weighted in determining an “interview-worthy” score (a sketch follows below). Another stakeholder may not, strictly speaking, need certain information to do their job, but having it would make the job easier; that’s a good reason to share it. If sharing it would also create an unnecessary risk of compromising IP, however, it may be best to withhold it.
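As a concrete illustration of the HR example, here is a minimal sketch of how weighted input variables might determine an “interview-worthy” score. The features, weights, and cutoff are invented assumptions, not any real hiring system.

```python
# Hypothetical sketch of how input variables might be weighted to produce
# an "interview-worthy" score. Features, weights, and the cutoff are all
# invented for illustration.

WEIGHTS = {
    "years_experience": 0.4,
    "skills_match": 0.5,
    "referral": 0.1,
}
INTERVIEW_THRESHOLD = 0.6  # assumed cutoff for flagging a candidate

def interview_score(candidate: dict) -> float:
    """Weighted sum of normalized candidate features (each in [0, 1])."""
    return sum(WEIGHTS[f] * candidate[f] for f in WEIGHTS)

candidate = {"years_experience": 0.7, "skills_match": 0.8, "referral": 1.0}
score = interview_score(candidate)
print(f"score={score:.2f}, interview={score >= INTERVIEW_THRESHOLD}")
# score=0.78, interview=True
```

An HR manager who can see the weights and the cutoff can explain, and be accountable for, the scores the system produces.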

Knowing why someone needs an explanation can also reveal how high a priority transparency is for each stakeholder. For instance, some information will be nice to have but not, strictly speaking, necessary, and there may be various reasons for providing or withholding that additional information.

These kinds of decisions will ultimately need to be systematized in policy and procedure.

Once you know who needs what and why, there is then the issue of providing the right kinds of explanations. A chief information officer can understand technical explanations that, say, the chief executive officer might not, let alone a regulator or the average consumer. Communications should be tailored to their audiences, and these audiences are diverse in their technical know-how, educational level, and even in the languages they speak and read. It’s crucial, then, that AI product teams work with stakeholders to determine the clearest, most efficient, and easiest method of communication, down to the details of whether communication by email, Slack, in-person onboarding, or carrier pigeon is the most effective.

. . .

Implicit in our discussion has been a distinction between transparency and explainability. Explainable AI has to do with how the AI model transforms inputs into outputs: What are the rules? Why did this particular input lead to this particular output? Transparency is about everything that happens before and during the production and deployment of the model, whether or not the model’s outputs are explainable.
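To make the explainability side of that distinction concrete, here is a minimal sketch using a small interpretable model whose input-to-output rules can be printed directly. It assumes scikit-learn is available, and the data and feature names are invented.

```python
# Hypothetical sketch: for a simple interpretable model, the literal rules
# mapping inputs to outputs can be printed, answering "why did this input
# lead to this output?" Data and feature names are invented.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy loan-approval data: [age, has_collateral] -> approved (1) or not (0)
X = [[25, 0], [42, 1], [35, 1], [23, 0], [52, 1], [46, 0]]
y = [0, 1, 1, 0, 1, 0]

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the learned decision rules in plain language.
print(export_text(model, feature_names=["age", "has_collateral"]))
```

Most production models are far less legible than this toy tree, which is precisely why explainability is a separate problem from the transparency of the surrounding process.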

Explainable AI is or can be important for a variety of reasons that are distinct from what we’ve covered here. That said, much of what we’ve said also applies to explainable AI. After all, in some instances it will be important to communicate to various stakeholders not just what people have done to and with the AI model, but also how the AI model itself operates. Ultimately, both explainability and transparency are essential to building trust.

Central African Republic President Reveals Crypto Hub Launch Date

Faustin-Archange Touadéra – President of the Central African Republic (CAR) – has announced that his nation’s burgeoning crypto hub will launch on July 3rd. The initiative (also known as the “Sango” project) is intended to make CAR the most “progressive” economy in Africa through the use of blockchain technology.

The Genesis of Sango

President Touadéra revealed the news in a tweet on Monday, in which he reaffirmed his commitment to establishing Bitcoin as legal tender. “With Bitcoin as legal tender & inspiration, our country opens a new chapter in its inspiring journey towards a brighter future via blockchain tech,” he said.

CAR caught the world by surprise in April when the President signed a crypto legal framework into law, which also established Bitcoin as an official currency. This meant that the government would treat Bitcoin like the legacy CFA franc – exempt from the capital gains tax, and usable for paying one’s other tax obligations.

A month later, the President also announced the Sango project – a plan to turn CAR into a so-called “crypto hub” that attracts investors worldwide. Some of its sub-projects will include establishing a crypto national bank, creating a state-sponsored lightning wallet, and exempting crypto exchanges from taxes.

The project will also incorporate the “tokenization” of the country’s natural resources, according to a translation of today’s press release. More will be revealed on July 3rd at 7 pm CET during the Sango Genesis Event, which the president called the most “revolutionary” conference in the history of “blockchain technology” and “Web 3”.

Mimicking El Salvador

CAR’s Bitcoin adoption appears to closely follow El Salvador’s playbook. In September, the Central American country also established Bitcoin as legal tender, alongside its state-sponsored wallet “Chivo”.

Furthermore, El Salvador’s plans to build “Bitcoin City” are mirrored by CAR’s “crypto island” initiative – an ambitious project to create a unique investment location dedicated to crypto technology.

The global response to the two countries’ initiatives has been similar as well, and not necessarily for the better. As with El Salvador, the International Monetary Fund (IMF) has disapproved of the legal tender decision, citing “legal, transparency, and economic policy” challenges.

CAR’s authorities reportedly bypassed both the regional central bank and the World Bank when adopting Bitcoin. The latter confirmed that it will not support the Sango project with investments, though it did offer a $35 million loan to help “digitize” CAR’s public sector.

Japan says hard to confirm impact from Russia’s debt default


TOKYO (Reuters) - Japanese Finance Minister Shunichi Suzuki said on Tuesday that it was “a little difficult” at present to confirm the definite impact on Japan from Russia’s debt default.

Suzuki, who commented on the issue after being asked about it by reporters at a news conference following a regular cabinet meeting, added that any moves in Russian government bonds were likely to have a limited impact on Japanese investors.

“The ratio of investments in Russia as part of Japan’s overall foreign bond investments is limited,” Suzuki said.

“Moves in Russian government bonds are likely to result in limited direct losses for Japanese investors, including financial institutions,” he said.

The White House and the credit rating agency Moody’s said on Monday that Russia had defaulted on its international bonds for the first time in more than a century.

The Kremlin, which has the money to make payments thanks to oil and gas revenues, has rejected the claims that it has defaulted on its external debt.

UK’s Northern Ireland trade law clears first parliamentary hurdle

LONDON/DUBLIN (Reuters) - Legislation allowing Britain to scrap some of the rules on post-Brexit trade with Northern Ireland on Monday passed the first of many parliamentary tests, as Prime Minister Boris Johnson pressed on with plans that have angered the European Union.

Despite some fierce criticism, lawmakers voted 295 to 221 in favour of the Northern Ireland Protocol Bill, which would unilaterally overturn part of Britain’s divorce deal from the EU agreed in 2020. The bill now proceeds to line-by-line scrutiny.

Tensions with the EU have simmered for months after Britain accused Brussels of insisting on a heavy-handed approach to the movement of goods between Britain and Northern Ireland – checks needed to keep an open border with EU member Ireland.

Advertisement

Johnson has described the changes he is seeking as “relatively trivial” and ministers insist the move does not break international law, but the EU has started legal proceedings against Britain over its plans.

“While a negotiated outcome remains our preference – the EU must accept changes to the Protocol itself,” Foreign Secretary Liz Truss said on Twitter after the vote.

Asked if the changes set out in the new bill could be implemented this year, Johnson told broadcasters: “Yes, I think we could do it very fast, parliament willing”.

Johnson’s predecessor, Theresa May, was one of several from his Conservative Party to criticise their leader.

“This bill is not, in my view, legal in international law, it will not achieve its aims and it will diminish the standing of the United Kingdom in the eyes of the world, and I cannot support it,” she said.

Ahead of the vote, Irish Foreign Minister Simon Coveney said the bill would not lead to a sustainable solution and would only add to uncertainty in Northern Ireland.

“I am hugely disappointed that the British government is continuing to pursue its unlawful unilateral approach on the Protocol on Northern Ireland,” he said in a statement.

Johnson has a majority to push the law through the House of Commons, though the vocal group of rebels will add to concerns about his authority following his survival in a confidence vote on June 6 and the embarrassing loss of two parliamentary seats on Friday.

The bill will face a bigger challenge when it eventually moves to the upper house, the unelected House of Lords, where the government doesn’t have a majority and many peers have expressed concern about it.
