Artificial intelligence is on everyone’s lips these days, sparking excitement, concern and endless debates. Is it a force for good or bad – or a force we have yet to fully understand? We sat down with prominent computer scientist and AI researcher Mária Bieliková to discuss these and other pressing issues surrounding AI, its impact on humanity, and the broader ethical dilemmas and questions of trust it raises.
Congratulations on becoming the latest laureate of the ESET Science Award. How does it feel to win the award?
I feel immense gratitude and happiness. Receiving the award from Emmanuelle Charpentier herself was an incredible experience, filled with intense emotions. This award doesn’t just belong to me – it belongs to all the remarkable people who accompanied me on this journey. I believe they were all equally thrilled. In IT, and in technology generally, results are achieved by teams, not individuals.
I am delighted that this is the first time the main category of the award has gone to the field of IT and AI. 2024 was also the first year the Nobel Prize was awarded for progress in AI. In fact, there were four Nobel Prizes for AI-related inventions – two in Physics for machine learning with neural networks and two in Chemistry for training deep neural networks that predict protein structures.
And of course, I feel immense joy for the Kempelen Institute of Intelligent Technologies, which was established four years ago and now holds a stable place in the AI ecosystem of Central Europe.
A leading Slovak computer scientist, Mária Bieliková has conducted extensive research in human-computer interaction analysis, user modeling and personalization. Her work also extends to data analysis and the modeling of antisocial behavior on the web, and she is a prominent voice in the public discourse about trustworthy AI, the spread of disinformation, and how AI can be used to combat the problem. She also co-founded and currently heads the Kempelen Institute of Intelligent Technologies (KInIT), where ESET acts as a mentor and partner. Ms. Bieliková recently won the Outstanding Scientist in Slovakia category of the ESET Science Award.
Author and historian Yuval Noah Harari has made the pithy observation that for the first time in human history, no one knows what the world will look like in 20 years or what to teach in schools today. As someone deeply involved in AI research, how do you envision the world twenty years from now, particularly in terms of technology and AI? What are the skills and competencies that will be essential for today’s children?
The world has always been difficult, uncertain, and ambiguous. Today, technology accelerates these challenges in ways that people struggle to manage in real time, making it hard to foresee the consequences. AI not only helps us automate our activities and replace humans in various fields, but also create new structures and synthetic organisms, which could potentially cause new pandemics.
Even if we didn’t anticipate such scenarios, technology is consciously or unconsciously used to divide groups and societies. It is no longer just digital viruses aiming to paralyze infrastructure or gain resources; it is direct manipulation of human thinking through propaganda spread at the speed of light and at a magnitude we couldn’t have imagined a few decades ago.
I don’t know what kind of society we’ll live in 20 years from now or how the rules of humanity will change. It might take longer, but we might even be able to adjust our meritocratic system, currently based on the evaluation of knowledge, in a way that doesn’t divide society. Perhaps we’ll change the way we treat data once we realize we can’t fully trust our senses.
I am convinced that even our children will increasingly move away from the need for knowledge and from measuring success through various tests, including IQ tests. Knowledge will remain important, but it must be knowledge that we can apply. What will really matter is the energy people are willing to invest in doing meaningful things. That is true today, but we often underutilize this perspective when discussing education. We still evaluate cognitive skills and knowledge despite knowing that these competencies alone are insufficient in the real world today.
I believe that as technology advances, our need for strong communities and for the development of social and emotional skills will only grow.
As AI continues to advance, it challenges long-standing philosophical ideas about what it means to be human. Do you think René Descartes’ observation about human exceptionalism, “I think, therefore I am”, will need to be re-evaluated in an era when machines can “think”? How far do you believe we are from AI systems that could push us to redefine human consciousness and intelligence?
AI systems, especially the large foundation models, are revolutionizing the way AI is used in society. They are continually improving. Before the end of 2024, OpenAI announced new models, o3 and o3-mini, which achieved significant advancements on all tests, including the ARC-AGI benchmark that measures AI’s efficiency in acquiring skills for unknown tasks.
From this, one might assume that we are close to reaching Artificial General Intelligence (AGI). Personally, I believe we aren’t quite there with current technology. We have excellent systems that can assist in programming certain tasks, answer numerous questions, and in many tests perform better than humans. However, they don’t truly understand what they are doing. Therefore, we cannot yet talk about genuine thinking, although some reasoning behind task resolution is already being done by machines.
Just as we understand terms like intelligence and consciousness today, we can say that AI possesses a certain level of intelligence – meaning it has the ability to solve complex problems. However, as of now, it lacks consciousness. Based on how it functions, AI doesn’t have the capability to feel and use emotions in the tasks it is given. Whether this will ever change, or whether our understanding of these concepts will evolve, is hard to predict.

The notion that “to create is human” is being increasingly questioned as AI systems become capable of producing art, music, and literature. In your view, how does the rise of generative AI impact the human experience of creativity? Does it enhance or diminish our sense of identity and uniqueness as creators?
Today, we witness many debates on creativity and AI. People devise various tests to showcase how far AI has come and where these AI systems or models surpass human capabilities. AI can generate images, music, and literature, some of which could be considered creative, but certainly not in the same way as human creativity.
AI systems can and do create original artifacts. Although they generate them from pre-existing materials, we can still find some genuinely new creations among them. But that is not the only important aspect. Why do people create art, and why do people watch, read, and listen to art? At its essence, art helps people find and strengthen relationships with one another.
Art is an inseparable part of our lives; without it, our society would be very different. This is why we can appreciate AI-generated music or paintings – AI was created by humans. However, I don’t believe AI-generated art would satisfy us in the long term to the same extent as real art created by humans, or by humans with the assistance of technology.
Just as we develop technologies, we also seek reasons to live and to live meaningfully. We might live in a meritocracy where we try to measure everything, but what brings us closer together and defines us are stories. Yes, we could generate those too, but I am talking about the stories that we live.
AI research has seen fluctuations in progress over the decades, but the recent pace of advancement – especially in machine learning and generative AI – has surprised even many experts. How fast is too fast? Do you think this rapid progress is sustainable, or even desirable? Should we slow down AI innovation to better understand its societal impacts, or does slowing down risk stifling beneficial breakthroughs?
The speed at which new models are emerging and improving is unprecedented. This is largely due to the way our world functions today – a massive concentration of wealth in private companies and in certain parts of the world, as well as a global race in multiple fields. AI is a significant part of these races.
To some extent, progress depends on the exhaustion of today’s technology and the development of new approaches. How much can we improve current models with known methods? To what extent will big companies share new approaches? Given the high cost of training large models, will we just be observers of improving black boxes?
At present, there is no balance between the systems humanity can create and our understanding of their effects on our lives. Slowing down, given how our society works, isn’t possible, in my view, without a paradigm shift.
This is why it is crucial to allocate resources and energy to researching the effects of these systems and to testing the models themselves, not just through standardized tests as their creators do. For example, at the Kempelen Institute, we research the ability and willingness of models to generate disinformation. Recently, we have also been looking into the generation of personalized disinformation.
There is a lot of excitement around AI’s potential to solve global challenges – from healthcare to climate change. Where do you believe the promise of AI is greatest in terms of practical and ethical applications? Can AI be the “technological fix” for some of humanity’s most pressing issues, or do we risk overestimating its capabilities?
AI can help us tackle the most pressing issues while simultaneously creating new ones. The world is full of paradoxes, and with AI, we see this at every turn. AI has been helpful in various fields. Healthcare is one such area where, without AI, some progress – for example, in developing new medicines – wouldn’t be possible, or we would have to wait much longer. AlphaFold, which predicts the structure of proteins, has huge potential and has been used for years now.
On the other hand, AI also enables the creation of synthetic organisms, which can be beneficial but also pose risks such as pandemics or other unforeseen situations.
AI assists in spreading disinformation and manipulating people’s thoughts on issues like climate change, while at the same time it can help people understand that climate change is real. AI models can demonstrate the potential consequences for our planet if we continue on our current path. This is important, as people tend to focus only on short-term challenges and often underestimate the seriousness of the situation unless it directly affects them.
However, AI can only help us to the extent that we, as humans, allow it to. That is the biggest challenge. Since AI doesn’t understand what it produces, it has no intentions. But people do.

With great potential also come significant risks. Prominent figures in tech and AI have expressed concerns about AI becoming an existential threat to humanity. How do you think we can balance responsible AI development with the need to push boundaries, all while avoiding alarmism?
As I mentioned before, the paradoxes we witness with AI are immense, raising questions for which we have no answers. They pose significant risks. It is fascinating to explore the possibilities and limits of technology, but on the other hand, we aren’t ready – as humans, nor as a society – for this kind of automation of our skills.
We need to invest at least as much in researching technology’s impact on people, their thinking, and their functioning as we do in the technologies themselves. We need multidisciplinary teams to jointly explore the possibilities of technologies and their impact on humanity.
It is as if we were creating a product without caring about the value it brings to the customer, who would buy it, and why. If we didn’t have a buyer, we wouldn’t sell much. The situation with AI is more serious, though. We have use cases, products, and people who want them, but as a society we don’t fully understand what is happening when we use them. And perhaps most people don’t even want to know.
In today’s globalized world, we cannot stop progress, nor can we slow it down. It only slows when we are saturated with results and find it hard to improve, or when we run out of resources, as training large AI models is very expensive. That is why the best protection is researching their impact from the beginning of their development and creating boundaries for their use. We all know that it is prohibited to drink alcohol before the age of 18, or 21 in some countries, yet often without hesitation we allow children to chat with AI systems, which they can easily liken to humans and trust implicitly without understanding the content.
Trust in AI is a major topic globally, with attitudes toward AI systems varying widely between cultures and regions. How can the AI research community help foster trust in AI technologies and ensure that they are seen as beneficial and trustworthy across diverse societies?
As I was saying, multidisciplinary research is essential not only for discovering new possibilities and improving AI technologies but also for evaluating their capabilities, how we perceive them, and their impact on individuals and society.
The rise of deep neural networks is changing the scientific methods of AI and IT. We have artificial systems whose core principles are known, but through scaling they can develop capabilities that we cannot always explain. As scientists and engineers, we devise ways to ensure the necessary accuracy in specific situations by combining various processes. However, there is still much we don’t understand, and we cannot fully evaluate the properties of these models.
Such research doesn’t produce direct value, which makes it challenging to garner voluntary support from the private sector on a larger scale. This is where the private and public sectors can collaborate for the future of us all.
AI regulation has struggled to keep up with the field’s rapid advancements, and yet, as someone who advocates for AI ethics and transparency, you have likely thought about the role of regulation in shaping the future. How do you see AI researchers contributing to policies and regulations that ensure the ethical and responsible development of AI systems? Should they play a more active role in policymaking?
Thinking about ethics is crucial, not only in research but also in the development of products. However, it can be quite expensive, because it is important that a real need arises at the level of critical mass. We still have to consider the dilemma of acquiring new knowledge versus potentially interfering with people’s autonomy or privacy.
I am convinced that a good resolution is possible. The question of ethics and credibility must be an integral part of the development of any product or research from the beginning. At the Kempelen Institute, we have experts on ethics and regulation who help not only researchers but also companies in evaluating the risks linked to the ethics and credibility of their products.
We see that all of us are becoming more sensitive. Philosophers and lawyers are thinking about these technologies and offering solutions, even if they don’t eliminate the risks, while scientists and engineers are asking themselves questions they hadn’t considered before.
Generally, though, there are still too few of these activities. Our society evaluates results primarily based on the number of scientific papers produced, leaving little room for policy advocacy. This makes it all the more critical to create space for it. Recently, in certain circles, such as the natural language processing and recommender systems communities, it has become standard for scientific papers to include considerations of ethics as part of the review process.
As AI researchers work toward innovation, they are often confronted with ethical dilemmas. Have you encountered challenges in balancing the ethical imperatives of AI development with the need for scientific progress? How do you navigate these tensions, particularly in your work on personalized AI systems and data privacy?
At the Kempelen Institute, it has been helpful to have philosophers and lawyers involved from the very beginning, helping us navigate these dilemmas. We have an ethics board, and diversity of opinions is one of our core values.
Of course, it is not easy. I find it particularly problematic when we want to translate research results into practice and encounter issues with the data the model was trained on. In this regard, it is crucial to ensure transparency from the outset, so we can not only write a scientific paper but also help companies innovate their products.
Given your collaboration with large technology companies and organizations, such as ESET, how important do you think it is for these companies to lead by example in promoting ethical AI, inclusivity, and sustainability? What role do you think corporations should play in shaping a future where AI is aligned with societal values?
The Kempelen Institute was established through the collaboration of individuals with strong academic backgrounds and visionaries from several large and medium-sized companies. The idea is that shaping a future where AI aligns with societal values cannot be achieved by just one organization. We have to connect and seek synergies wherever possible.
For that reason, in 2024 we organized the first edition of the AI Awards, focused on trustworthy AI. The event culminated at the Forbes Business Fest, where we announced the laureate of the award – AI:Dental, a startup. In 2025, we are successfully continuing the AI Awards and have received more, and higher-quality, applications.
We started discussing the topic of AI and disinformation almost 10 years ago. Back then, it was more academic, but even then we witnessed some malicious disinformation, especially related to human health. We had no idea of the immense influence this topic would eventually have on the world. And it is only one of many pressing issues.
I fear that the public sector alone has no chance of tackling these issues without the help of large companies, especially today, when AI is being used by politicians to gain popularity. I consider the topic of trustworthiness in technology, particularly AI, to be as important as other key topics in CSR. Supporting research on the properties of AI models and their impact on people is fundamental for sustainable progress and quality of life.
Thank you for your time!