Navigating the AI landscape: A conversation with World Economic Forum’s Cathy Li – ET Edge Insights


From short-term risks to long-term impacts, WEF’s Cathy Li discusses AI governance.

Meet Cathy Li, Head of AI, Data and Metaverse and Member of the Executive Committee at the World Economic Forum, and a driving force in AI governance.



The World Economic Forum established the AI Governance Alliance (AIGA) to ensure that exciting advances in AI, such as generative AI, are matched by responsible and safe practices. Its initiatives address short-term risks, like cyber threats, while also planning for the long-term impacts of advanced AI. The Alliance is navigating the debate between open-source and closed-source models, advocating for innovation while safeguarding against misuse.

For countries like India, Cathy sees immense opportunities in AI adoption, but she also urges caution regarding privacy, security, and equal access to AI tools, especially in local languages.

Cathy emphasizes the importance of a coordinated global approach to AI regulation and governance.

The aim is to harness the potential of AI for transformative applications like addressing climate change and enhancing healthcare and education. Cathy’s call to action? Join the global community in shaping the future of AI responsibly and collaboratively.


Q: Generative AI has sparked much excitement since its release in 2022, revolutionizing how we think about artificial intelligence. The World Economic Forum established the AI Governance Alliance (AIGA). Can you outline the initiatives being undertaken to address the rapid adoption of Generative AI and ensure its responsible design, development, and deployment?

Absolutely! Generative AI, like ChatGPT, has captured everyone’s imagination. The tool is so intuitive to use and there’s no adoption barrier.

Over the past 13-14 months, there’s been a huge increase in AI investments. This has really breathed new life into the industry, which is super encouraging.

However, while it’s encouraging that so many companies are embracing AI, there’s also a need for caution. AI is being adopted faster than any technology before it, so companies need to be careful about how they use it. It’s not just leadership making these decisions – many regular employees are using AI tools in their everyday work.

So, what are we doing about it? Well, we started something called the AI Governance Alliance where people from all kinds of organizations come together to figure out how to use AI responsibly. We’ve been building this community for a while now because we believe in teamwork.

We’ve got all sorts of folks involved – from big companies to government departments. We’ve got experts in technology and people who know all about making business strategies. We’re all working together to make sure AI is used in the best possible way.

Our main goal is to make sure AI, especially Generative AI, is developed and used responsibly. We’re looking at the whole AI world, but we’re especially focused on Generative AI because it’s such a big deal right now.

We’re doing this by looking at three main areas: making sure AI is safe to use, figuring out how to use it responsibly in different industries, and making sure there are rules and regulations in place to keep everything in check.

We’re excited about the progress we’ve made so far. We have over 200 organizations and 150 individuals on board, and we’ve just started sharing our ideas and publishing our findings with the world. It’s going to be an exciting year ahead!

Q. Many business and tech leaders talk about the benefits of AI but also warn about its risks. Can you explain why some people are concerned about AI?

When we look at the history of technology, like the printing press or the internet, there’s always excitement about the good things they bring. But there’s also worry because technology can be used for both good and bad. This fear isn’t new; whenever something big happens, people get concerned about how it affects their lives. However, history shows us that we’re pretty good at handling these challenges. We invent the tools, so we know how to make them safe. Compared to when social media started, we’re now more aware of the risks. So, while there are concerns about AI, we’re better prepared to deal with the downsides.

Q. What are the key debates currently ongoing in AI governance?

Right now, there are two big debates in AI governance. One is about the risks we face in the short term versus the long term. Short-term risks come from advanced AI being used by cybercriminals to attack digital systems, leading to problems like deepfakes and privacy violations. In the long term, we worry about not fully understanding how this powerful technology works and the risks it might bring. At the same time, experts believe that advanced AI could help us solve big problems like climate change.

The second debate is about open-source versus closed-source models. Open source allows more people to use and improve AI, potentially making it safer in the long run, but there’s a risk of bad actors getting hold of it. Closed-source models, while more controlled, could concentrate too much power in the hands of a few.

Q. Is this the reason that the European Union (EU) recently exempted open-source models from certain regulations?

Yes, the EU’s approach here is similar to the US executive order. Right now, open source isn’t included in the regulations. We want to protect the technology and how it’s used, but we also want to keep encouraging new ideas. There are still many things we’re figuring out about open source.

That’s why, for now, open source doesn’t have to follow all the big rules. Maybe that’ll change later when the technology gets even better, but that’s the plan for now.

Q. How does good AI fight against bad AI, especially considering the increasing cyber threats and attacks we’ve been witnessing? What strategies can be employed to tackle these challenges effectively?

So, let’s go back to what I mentioned earlier. AI can be used in two ways: to enhance attacks and to defend against them.

Firstly, it’s essential to continuously learn and adapt. AI-driven cybersecurity systems must be able to learn from new threats in real-time, using advanced machine learning techniques. This helps them stay ahead of evolving cyber threats.

Secondly, fostering a culture of innovation is crucial. Security leaders should encourage ongoing research and development to anticipate and counter emerging risks. This means investing in cutting-edge technologies and methodologies.

Collaboration also plays a vital role. By partnering with other organizations and sharing knowledge, cybersecurity systems can gain a broader understanding of potential threats and become more effective.

Lastly, implementing proactive and predictive security measures is important. Instead of reacting to threats after they happen, systems should be able to anticipate and prevent them before they materialize.
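The “learn and adapt in real time” point can be illustrated with a toy sketch (this is a hypothetical illustration, not any specific product’s method): a streaming detector whose baseline of “normal” behavior updates with every observation, so a sudden deviation – like a traffic spike from a flood attack – gets flagged proactively rather than after the fact.

```python
import statistics
from collections import deque


class StreamingAnomalyDetector:
    """Flags observations that deviate sharply from a rolling baseline.

    A toy stand-in for adaptive, learning-based defense: the baseline
    updates with every observation, so what counts as "normal" shifts
    as traffic patterns drift over time.
    """

    def __init__(self, window: int = 50, threshold: float = 3.0):
        self.window = deque(maxlen=window)  # most recent observations
        self.threshold = threshold          # z-score cutoff for "anomalous"

    def observe(self, value: float) -> bool:
        """Return True if `value` looks anomalous against the rolling window."""
        anomalous = False
        if len(self.window) >= 10:  # wait for a minimal baseline first
            mean = statistics.fmean(self.window)
            stdev = statistics.pstdev(self.window) or 1e-9  # avoid divide-by-zero
            anomalous = abs(value - mean) / stdev > self.threshold
        self.window.append(value)  # baseline keeps adapting either way
        return anomalous


# Example: steady request rates, then a sudden spike (e.g. a traffic flood).
detector = StreamingAnomalyDetector()
normal = [detector.observe(100 + (i % 5)) for i in range(40)]  # all False
spike = detector.observe(500)  # flagged as anomalous
```

Real systems use far richer models, but the design choice is the same: detection thresholds are derived from continuously updated behavior rather than fixed signatures, which is what lets the defense anticipate novel attack patterns.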

Q. You mentioned that countries like India have a ‘sea of opportunities’ with AI. What opportunities do you see, and what challenges should we be cautious about?

AI is everywhere, and it doesn’t recognize borders. Generative AI can be a game-changer. India has advanced digital infrastructure and a young, tech-savvy population. This puts India in a good position to benefit from AI.

Flexible regulations, like India’s approach to privacy and related laws, also indicate a conducive environment for AI adoption.

At the same time, we need to ensure equal access to AI models and data. Developing countries shouldn’t fall behind in adopting generative AI. Access to AI models in local languages is crucial, because the most powerful models are built primarily on English, Chinese, and Spanish. Given its linguistic diversity, India should be able to build its own language models.

However, there are risks like privacy, security, and safety. Additionally, ensuring equal access to AI tools, especially in local languages, is crucial. Technology should reflect local culture and social values.

Q. What will be your call to action for our readers and what will be AIGA’s focus this year?

I invite any companies or governments interested in learning about Generative AI on a global scale and understanding what others are doing in this field to join us. It’s important for us to have a coordinated global approach to AI regulation and governance.

This year, our focus will be on access points. The most exciting thing about this technology is not just its ability to increase productivity but also its potential for moonshot applications. These include addressing issues like climate change, making scientific discoveries, improving healthcare, and enhancing education. We cannot achieve these goals alone; we need the entire global community to come together and collaborate. This is my call to action.

Disclaimer: The views expressed in this article are those of the author and do not necessarily reflect the views of ET Edge Insights, its management, or its members

