Telecoms & IT

Incapable of Error

The Risky Business of AI

Will regulation catch up with the advances being made in Artificial Intelligence?

A visitor takes a picture of a display demonstrating crowd surveillance at the stall of the artificial intelligence and facial recognition technology company Sensetime at the Security China 2018 exhibition on public safety and security in Beijing, China, October 23, 2018. REUTERS/Thomas Peter

Artificial intelligence arguably presents perils both to shareholder value and to the future of civilization, pitfalls that most contractors do not discuss when bidding on projects that use the fast-emerging technology.

Across a range of sectors, there is a rush to adopt or adapt to new AI technology, but the instruction manuals for these new systems do not come with a philosophy on how to use them.

It is difficult, too, for contractors and clients in public-private partnerships to fully understand the consequences of the technology they are putting in place.

And it is not just civil servants or politicians who should be worried. AI systems could create exorbitant tort liabilities and antitrust prosecution risks because their algorithms lack the common sense that human employees have.

A recent study showed that machine learning systems tasked with setting product prices discovered on their own that colluding with competitors to fix prices boosted profits.
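
To see how such behavior can emerge without any explicit instruction to collude, consider the minimal sketch below of two independent Q-learning agents repeatedly setting prices against each other. The price grid, demand shares, unit cost, and learning parameters are purely illustrative assumptions and are not taken from the study itself.

```python
# Toy repeated-pricing game between two independent Q-learning agents.
# All numbers below (price grid, demand shares, unit cost, hyperparameters)
# are illustrative assumptions, not figures from any published study.
import random

PRICES = [1.0, 1.5, 2.0, 2.5]        # hypothetical price grid: low ~ competitive, high ~ monopoly
EPISODES = 50_000
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.05
COST = 0.5                            # assumed unit cost

def profit(own, rival):
    """Crude demand rule: the cheaper seller captures the larger market share."""
    share = 0.5 if own == rival else (0.8 if own < rival else 0.2)
    return (own - COST) * share

# One Q-table per agent, indexed by (rival's last price, own candidate price).
q = [{(r, p): 0.0 for r in PRICES for p in PRICES} for _ in range(2)]
last = [random.choice(PRICES), random.choice(PRICES)]

for _ in range(EPISODES):
    acts = []
    for i in (0, 1):
        state = last[1 - i]                       # each agent reacts to the rival's last price
        if random.random() < EPSILON:
            acts.append(random.choice(PRICES))    # occasional exploration
        else:
            acts.append(max(PRICES, key=lambda p: q[i][(state, p)]))
    for i in (0, 1):
        state, next_state = last[1 - i], acts[1 - i]
        reward = profit(acts[i], acts[1 - i])
        best_next = max(q[i][(next_state, p)] for p in PRICES)
        q[i][(state, acts[i])] += ALPHA * (reward + GAMMA * best_next - q[i][(state, acts[i])])
    last = acts

# The learned policy can settle well above the competitive price,
# even though neither agent was ever told the other exists.
print({r: max(PRICES, key=lambda p: q[0][(r, p)]) for r in PRICES})
```

Neither agent is instructed to coordinate; any supra-competitive pricing that emerges is simply the by-product of each algorithm chasing the reward it was given.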

Incorporating AI into any venture comes with unpredictable externalities and liabilities that decision makers must take into account before jumping into expensive, and possibly wasteful, AI projects.

Such projects could also present bigger challenges in the future, including backlash from consumers over the protection of their personal data, or the fear that people could lose faith in the fairness of AI technology as it applies to their lives.

Certain countries are taking the lead on adopting AI technology, and some businesses have made it a part of their logistical infrastructure and offerings to clients.

In 2017, the UAE appointed a Minister for Artificial Intelligence, who oversees the forecasted adoption of AI-enabled technologies into daily life in the UAE, in particular in the commercial center of Dubai.

According to an official release, the Emirates hopes to wire up traffic and security monitors, analyze water systems, and determine when and how to coordinate security services more efficiently and effectively with the help of AI systems.

“AI is not negative or positive. It’s in between. The future is not going to be a black or white. As with every technology on Earth, it really depends on how we use it and how we implement it,” HE Omar Bin Sultan Al Olama told Futurism in 2017.

Input from human beings is not just how AI functions; it is also key to understanding how to use it in the right way.

“People need to be part of the discussion. It’s not one of those things that just a select group of people need to discuss and focus on.”

Whatever the applications of AI, no technology can ever solve every problem facing decision makers in business or government.

Indeed, new technology can create new problems.

It is also important to remember the fundamental limitations of the technology available. AI does not really think; rather, it uses calculations that simulate the process of learning.

Unlike a living creature, a machine learning algorithm does not “worry” about a mistake costing money or even lives. That is a problem for you, the user.
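
The point can be made concrete with a few lines of code. In the toy loss function below (a hypothetical example, not drawn from any real system), an error only “matters” to the extent that a human operator chooses to weight it.

```python
# To a learning algorithm, a mistake is just a number. Unless a human
# explicitly encodes the real-world cost of an error, a fatal misjudgment
# and a trivial one can look identical. The weights are illustrative.
def loss(predicted, actual, cost_of_error=1.0):
    """Squared error, scaled by whatever cost the human operator assigns."""
    return cost_of_error * (predicted - actual) ** 2

# Two errors of identical numerical size...
minor = loss(predicted=0.9, actual=1.0)    # e.g. a mispriced sandwich
severe = loss(predicted=0.9, actual=1.0)   # e.g. a missed hazard
print(minor == severe)                     # True: the algorithm cannot tell them apart

# ...until a person decides they should be treated differently.
severe_weighted = loss(predicted=0.9, actual=1.0, cost_of_error=1000.0)
print(severe_weighted > minor)             # True: the "worry" has to be written in by the user
```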

“There is lots of conversation about what happens when, and if, we develop general or ‘conscious’ AI, but the near-term problem is absolutely in how humans will use it. AI outcomes at this point are simply a reflection of the inputs and goals we give them,” Sam Weston, a specialist in AI, told TBY in recent correspondence. Weston helped start AI Policy Labs, a non-profit advising government officials on the potential of the technology.

Silicon Valley bubbles with talk of “disrupting” some market or other with a mobile app or social media platform, but AI technology could disrupt more than just how we find a cab or a date.

These systems work by connecting human beings, but AI introduces an alien element into the Information Age.

More worrisome, perhaps, than the technology becoming “self-aware” is the possibility of its misuse by humans, something some technologists fear is already happening.

This is not just science fiction; it is real.

AI technology is neither conscious, nor self-aware, nor in need of sleep or food, yet machine learning programs can produce an original piece of music or even a poem. They can drive a car, but they do not fear death or injury in a crash or understand the concept of “mortality.”

Unlike a real employee, a machine is incapable of lying on its own or of deciding to drink the last drop of office coffee without making a new pot, as human employees may do. It is immune to a scalding email. It does not know “right” from “wrong.”

Nevertheless, AI has become more popular as the world’s immeasurably vast and growing store of data has become harder and harder for businesses and governments alike to manage. This information can be meaningless or highly personal. Sifting through trillions of bits of information is most easily achieved using machine learning technology, the current stage of “Artificial Intelligence” available on the market.

McDonald’s recently purchased an Israeli machine learning startup called Dynamic Yield for USD300 million, and is using the technology to keep track of all its customers’ data and its vast logistics network, in the hopes of getting the delivery of fast food down to a science instead of a flawed human art form.

But some functions of government and the marketplace are potentially impossible to pass over to a machine, such as the personal, face-to-face relationships that underpin both arenas of power and influence.

Contractors may be tempted to overstate the capabilities of AI, but clients should remain skeptical about unproven claims and demand evidence of previous success from AI firms. They must also consider the long-term consequences of removing humans from any stage of business operations or civil functions.

“Any business with large first party data sets, and any business whose business is tied to efficiency. I would also include advertising and communications in here, which is often overlooked,” Weston added. “There are specific questions about how authoritarian governments may use this technology to increase their control over discourse/dissent and public movement that deserve a lot of focus.”

Machine learning code works to vastly enhance the capabilities of people, just like every other technological innovation. Even figuring out how to use fire ethically has been a struggle for people since they first harnessed it. AI presents a similar challenge for businesses, governments, and humanity as a whole. And it’s not going anywhere. The worst thing governments and businesses can do is treat AI as a “fad,” Weston added.

“Public sector use will likely reach more people, more quickly, and in ways that are more likely to have physical safety implications, but the pitfalls, especially the need to factor in ethical considerations at the start of projects, [apply to both],” Weston added.

Indeed, both governments and the private sector have linked roles to play to make sure a public-private partnership (PPP) works out well for client and contractor. Governments need to make sure access to public data is simple and transparent, while businesses need to prioritize the public good over profit, “removing red tape while also working on developing new kinds of rules for oversight, safety, and ethical compliance.”

AI-enabled traffic systems can quickly interpret data pulled from thousands of sensors across a city and then make a call that affects human lives, whether those of people stuck in traffic or victims of crime, better than a human can. Shipping and transportation companies can use AI to make optimal decisions on package routing, and even use advanced robotics to replace workers, as happened in the automobile manufacturing industry. Today, even traditionally white-collar careers could be in jeopardy, as their routine chores get chipped away by ever more sophisticated forms of automation.

There are also cultural differences in how AI gets deployed and what it does. Japan was an early adopter of robotics technology, while adoption still lags in the US, although both are advanced economies. Each country, and each market, will react differently to the introduction of this technology, presenting another challenge for AI firms hoping to take their business global in the face of complex and diverse regulatory environments.

In the long term, governments and businesses must start thinking about forging international standards for AI and begin establishing “best practices.”

Such standards are not without precedent: city managers and CEOs already seek to meet international standards for water quality or food safety. There is a market failure in protecting the public, and investors, from the malicious use of AI or from poorly built systems. Standardization and international regulation can help keep all involved safe.

Sector-wide certification for AI, covering both safety and ethics, would help manage the constantly evolving challenges the technology presents. Sectors that want to use AI should work toward the internationalization of standards for data privacy and ethical practice. If doctors can obtain international certifications, then surely companies responsible for road traffic management should live up to similar requirements.

Standards for airline safety apply across the sector internationally. For example, the International Civil Aviation Organization, a UN agency, certifies that international pilots and air traffic controllers speak enough English to communicate. That consistency has helped airlines expand into global corporations, and has mostly guaranteed that airplane passengers enjoy a certain level of safety when traveling. Airline stocks plummet after crashes, as happened recently to Boeing. If AI technology fails to protect the public, the entire sector and its clients can expect greater public resistance to the assimilation of AI.

International commercial standards would not restrain every abuse of AI: states remain sovereign on Earth, while leaderless, malicious bands of hackers declare their own sovereignty online, alongside the world’s “cyberwarriors.” The US and China are still locked in a battle over the development of AI technology, part of a long-running cyber-conflict between the two countries. But just because a few governments are signing off on weaponizing AI does not mean all governments need to. Nor does it mean that executives can pretend not to know the difference between right and wrong because “the AI told us to…”

Somewhere, a human has to be responsible.

“Disruptive” technologies have revolutionized how we live in the 21st century, smartphones and social media in particular. The companies selling these technologies have made hundreds of billions of dollars. But that success has come with scrutiny. Social media companies are under greater regulatory pressure than ever, and in more ways, because they did not consider the long-term consequences of what connecting everyone would do to society or politics. Publicly traded social media platforms have changed society so drastically and so quickly that 20th-century, analog modes of politics now seem obsolete. Their shareholders would rather all this regulatory scrutiny, and these unanswerable questions, simply went away. A similar fate could await the AI companies of the future, and the governments that hired them.

The utopian hope behind the Internet was that it could make the world a better place, narrowing society’s rifts thanks to the magic of telecommunications. And in countless cases, it has. Nevertheless, the Internet is only as good or as bad as its users are to each other. The same goes for the power of AI, and with that power comes great responsibility.
