
Mercator Senior Fellowship on AI Governance: mid-term insights

by Julia Reinhardt

February 7, 2023

Time flies! It’s been SIX MONTHS since I started as a Senior Mercator Fellow at AI Campus Berlin. So much has happened, I’ve gained so many insights, and I hope I’ve also been helpful to the community by achieving a good deal of my project goals.

 


A quick look back: On August 1, 2022, I started my 12-month Senior Fellowship and, for that purpose, moved from San Francisco to Berlin. At that point, I had spent 10 full years in San Francisco, first as a German diplomat doing outreach to the Western US, getting to know everyone from big tech firms to start-ups, politics, academia and civil society. That left me with a huge network, which I extended further after leaving the diplomatic service and certifying as a privacy professional. I delved deep into data protection for small tech firms, and my specialty became helping them comply with (and understand, which are two different but ideally overlapping things!) the EU’s new General Data Protection Regulation (GDPR). Explaining and implementing the GDPR with small firms on the other side of the globe is by no means a no-brainer. But it became a big step in my journey into tech regulation and what it means to construct guardrails around our shiny new tools and the way they change our world.

When the EU announced plans for horizontal regulation of Artificial Intelligence, the Mozilla Foundation funded my work in 2020-21 with a Fellowship in Residence on the preparatory steps of the EU AI Act and on how small AI companies in the US could implement AI governance that is effective in avoiding harms and still technically compatible with what we want to achieve with AI.

My current Senior Fellowship with Germany’s Stiftung Mercator (2022-23) is meant to lift that work to a new level, in direct collaboration with developers at AI Campus Berlin, a co-working space founded in 2021, and with NGOs and foundations advocating for AI rules that protect civil rights and mitigate AI-induced harm to society and individuals. The idea is to encourage new alliances and foster dialogue between silos that too rarely speak to each other, although my observation is that they do share stances on AI governance, in particular with regard to the need to limit the dominance of Big Tech in AI development. Those who control the most data are usually in a position to build the best AI systems, and we don’t want those to be the same players over and over.

 

A random sample of insights from activities of the past 6 months:

Overcoming silos and encouraging exchange between AI developers and civil society: The AI Campus has become my home base, and my goal is to make it an inviting place for a diversity of communities. I invite voices to Campus that are not usually heard here – voices that enrich the work of the existing residents and make AI governance and trustworthiness a real part of the conversation.

 

  • A big day for me was January 13, when we had the first Responsible Innovators Lunch on Campus, with Ryan Carrier, CEO of the non-profit ForHumanity, as guest speaker on the topic of auditing AI. I’ve known Ryan since 2020, when I was still in San Francisco, in the midst of the pandemic, trying to learn more about how non-profits can audit AI. Our event this January was my first time seeing him in 3-D, after so many online check-ins and discussions over the years. The community and professional advice he has pooled within ForHumanity is so impressive, and if anyone is still wondering how the most serious AI auditing I know of can be done by a non-profit: with the tireless help of 1,049 volunteer contributors and 52 fellows from 79 countries, many of them experts in their field in academia and practice.

  • Responsible Innovators is a new community that I wholeheartedly support. Berlin-based Jolanda Rose, who hails from the Legal Tech field (a good complement to my policy/privacy background), has the ambition to gather professionals from tech and surrounding fields who are eager to widen their horizons to the environmental, social, ethical, cultural and economic impact of innovation. Our first gathering at AI Campus is now being established as a regular meet-up, online and in person, with deep dives from the field and ample opportunity to network.

  • And my biggest deal (so far) when it comes to bringing new people to AI Campus: for Chefrunde – the media executive circle – I partnered with Annette Milz to host 15 editors-in-chief of German media outlets on Campus, ranging from local print newspapers to digital outlets and national radio. They spent an immersive, high-level and comprehensive afternoon delving deep into the state of AI development in Germany and Europe, the Campus’ objectives, the use of AI in journalism, biotech, car software, confidential computing and countless sectors beyond these, and the ethics and upcoming governance of it all (my presentation can be found here). They super-charged their visit by attending a panel event on Natural Language Processing and Climate Tech (a much more urgent use of NLP than cheating on homework, I believe!) that one of the Campus’ startups hosted. Chefrunde is a format I’ve been part of since my first days in San Francisco, and in the past I’ve helped Annette bring in thought leaders like Kara Swisher or Ken Doctor to discuss their perspectives with German media executives – always off the record, but with deep impact on key multipliers and opinion formers. At this year’s edition at AI Campus Berlin, ChatGPT had captured everyone’s attention, but the focus also moved to questions around investments in Europe and tech sovereignty. It is important that we ask ourselves, and our media institutions, why so much of the NLP hype isn’t well translated and explained by the media, so that the general public can form a more realistic impression of the capabilities and use cases of AI and feel confident raising their voice about which guardrails we want and need.

 

Journalism and AI is a particularly interesting pairing for everyone pondering the state of democracy and the future of the profession. In the past weeks, I have delved deeper into the promises and obstacles of using AI to find relevant news, analyze patterns, personalize content, and filter tons of data to write articles; I will share some thoughts and sources in a separate post. The small sketch below gives a taste of what such filtering can look like.
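For the technically curious, here is a minimal sketch of one such use case: filtering a stream of headlines by topic with an off-the-shelf zero-shot classifier. It is only an illustration under my own assumptions – the model choice and topic labels are mine for the example, not a recommendation or anything close to a newsroom-grade pipeline.

```python
# Illustrative sketch: topic-filtering news headlines with a zero-shot
# classifier (Hugging Face transformers). Model and labels are example
# assumptions, not a production recommendation.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

headlines = [
    "Parliament reaches common position on AI regulation",
    "Local bakery wins national pastry award",
    "Startup uses NLP to track corporate climate pledges",
]

# Topics a hypothetical news desk might want to surface.
topics = ["technology policy", "climate", "business", "culture"]

for headline in headlines:
    result = classifier(headline, candidate_labels=topics)
    # The pipeline returns candidate labels sorted by score, best first.
    best_label, best_score = result["labels"][0], result["scores"][0]
    print(f"{headline} -> {best_label} ({best_score:.2f})")
```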

 

Small AI Companies and the European AI Act: Negotiations in Brussels on the first horizontal regulation of artificial intelligence have been proceeding, and after a common position was reached in the Council of Ministers in December 2022, all eyes are now on the European Parliament. 

 

  • After the Commission’s powerful serve in 2021 – the draft regulation – I was disappointed by the Council’s input, and especially by the German government’s reaction: a hodge-podge of statements, no coherence with the coalition treaty that has bound the three parties forming the German government since late 2021 (which stipulates that biometric identification in publicly accessible spaces must be ruled out by EU-wide legislation), and no tangible commitment to the AI Act’s important overall goal: “to enjoy the benefits of AI while feeling safe and protected”. From my years in European Coordination at the Federal Foreign Office (2008-11), I know first-hand what a “German vote” is (namely the German government’s repeated failure to achieve a coordinated position and its consequent abstention in COREPER or the Council) – but it’s ironic that this would happen to the one piece of EU legislation I’ve been focused on for years now. Fortunately, I’m more interested in the AI Act’s future impact than in the member state power game, so I’m glad a good part of the Commission draft survived the Council. It will be crucial now that civil society’s arguments for coherent protection of human rights, as well as small AI companies’ arguments for an easily implementable framework for industry with as few loopholes for Big Tech as possible, remain at the center of parliamentary deliberations. A plenary vote on the EP position could happen in the first half of 2023.

  • To make the case for my deep conviction that AI regulation can foster innovation and that there is no dichotomy between the two, I wrote a blog post with the co-founder and CEO of an AI startup I respect a lot: Maarten Stolk of Deeploy. Based in the Netherlands, Deeploy offers a technical solution for AI Act compliance, and given that early on, years ago, I looked into OneTrust’s efforts to make GDPR compliance easy (if sometimes a bit too one-size-fits-all), it is only logical that I’m interested in startups looking to make AI regulation accessible and an interesting business case. Maarten is a wonderful co-author for me since he is involved first-hand in the struggles of getting AI into production, which nicely complements my experience in governance and policy. Our stance is informed by both his entrepreneurial and data science expertise and my bird’s-eye view on where we are going as a society with AI. I hope the article finds your interest, and please don’t spare me your feedback (best via my website)!

  • I mentioned in a previous post that I joined the German AI Association as an expert back in August. Since Brussels is busy negotiating the pieces of legislation that will be fundamental to the AI ecosystem in Europe, it’s been exciting to provide input and to become a member of the association’s Steering Committee on EU Regulation. Through the Large European AI Models (LEAM) project, the AI Association is currently pushing for the establishment of an AI supercomputing infrastructure based in Europe (73% of foundation models worldwide are developed in the US, 15% in China). Given the claim that LEAM will help develop “trustworthy open-source foundation models following European ethical standards”, it will be crucial to define in practical terms what these standards are and how adherence to them can be transparently demonstrated to European citizens, and consequently create trust.

  • Increasingly, large consulting firms are gaining a foothold in this sector, sensing a growing market. But nothing beats creating strong alliances with NGOs and including the voice of civil society in the process. Berlin is host to a number of civil society organizations working on the impact of tech on society and on human rights in the digital sphere. I’m making progress getting to know most of them, so that I can fully grasp the range of know-how they can lend to developers in the AI industry. With the European AI Act being finalized, we need their expertise in sectors where the unregulated use of AI harms individuals and society and contradicts our norms. This field is at the core of my work, and I will share more about it during the second half of my fellowship.

Stiftung Mercator: Since I started by describing some of my activities with my host, the AI Campus, it’s fitting to dedicate the last part to the activities of my funder that I’m involved in.

 

  • Continuing the narrative of new, or unusual, alliances to regulate AI: I am glad that Stiftung Mercator is on the same page as me, and that we’re making efforts to gather other stakeholders around that aim. Carla Hustedt posted about this a few weeks ago, and as she mentions there, she’s counting (not only) on me: allies are needed, and fast! The Bertelsmann Foundation is pushing in a similar direction, and the louder civil society, with support from philanthropy, can be on this, the better.

  • More personally, 2023 started nicely for me, with Stiftung Mercator finding my work interesting enough to ask for my outlook on 2023, published in their magazine Aufruhr. I took the opportunity to give a sense of what lies ahead in terms of EU regulation of AI and why new alliances between small AI companies and civil society are necessary (see above). Aufruhr also wants to follow up with a more in-depth portrait of me and my work, which I am looking forward to reading (although it will be weird)!

  • I thoroughly enjoy being part of the Roundtable on Ethical AI Development, a Mercator project that consciously focuses more on bigger AI companies than my own work does, and that is adeptly run by the Gesellschaft für Informatik. I can contribute insights from smaller companies’ struggles with the practical implementation of ethical guidelines, along with lessons learned from my conversations with their development teams.

  • It’s exciting to follow and observe progress in other projects run by the Digital Society team at Stiftung Mercator. The biggest is the Agora Digitale Transformation, a new non-profit think tank that aims to contribute to a capable, innovative state that effectively integrates research, business and society, and to provide urgently needed impetus to digital policy through scientifically sound yet practical analyses. Executive director Stefan Heumann is making huge leaps putting together an agenda and a team! I’ve gained precious insights and new contacts from attending a roundtable on the upcoming implementation of the Digital Services Act (yes, we won’t be able to escape the nitty-gritty of it!), and a grantee retreat hosted together with the US-based Ford Foundation. Sophie Pornschlegel at the European Policy Centre/Connecting Europe and I are developing plans to connect our conversations on AI governance in Berlin with those in Brussels. And the universities of Bonn and Cambridge are partnering on a super-interesting research project on Desirable Digitalisation, whose participating academics are poised to offer top-notch insights into questions such as “How can we meaningfully assess whether and how AI systems violate fundamental rights and values?”, “How does AI development impact the environment and how can we foster truly sustainable approaches to technology production?” and “What does desirable technology development look like? And who gets to determine what counts as ‘desirable’?” I wish I already knew their answers to these questions – that would help me out a great deal!

Finally, for whoever has managed to read to this point (thank you!): I’m happy to report that I’ve been named one of the 100 “Brilliant Women in AI Ethics 2023”! The list is so impressive that I’ve been busy studying everyone else’s profiles on there, including those from previous years, and it’s a great resource. If you were ever wondering whom to consult for your questions around AI ethics, look no further than this (still too small and rarely recognized) crowd of women.

And now, off to the next six months of my Senior Mercator Fellowship!
