AI: Have We Found the Holy Grail or Got Caught in the Net?

This issue of the Rector’s blog focuses on the ongoing digitalization of higher education and society. TSU Rector Eduard Galazhinskiy shares his opinion on the significance of this complex process and the ambiguous effects it causes.

– Professor Galazhinskiy, this is not the first time we have talked about digitalization, artificial intelligence (AI), and related matters. It would not be an exaggeration to say that this topic is inexhaustible, since digitalization and the advent of artificial intelligence keep posing new problems for society and higher education, not all of which could have been foreseen in advance.

– Before talking about the problems that have emerged at the present stage of digitalization and the development of artificial intelligence, I would like to say the following. No matter how we feel about these processes, it is impossible to cancel or reverse them. It is also impossible to overestimate their importance for society as a whole and for each of us. It is now absolutely obvious that these processes determine how all spheres of today’s and tomorrow’s society will function, including public administration, the economy, healthcare, education, the mining and processing industries, agriculture, space, defense, and security in general. We simply cannot afford not to engage seriously in digitalization and artificial intelligence if we do not want to become hostages to other people’s discoveries and technologies in this area, as well as to all the possible consequences of their use. This challenge, driven by the need to create innovations related to artificial intelligence, is for us not only technological, but also geopolitical and existential, just as it was with nuclear technologies in the middle of the last century. I was finally convinced of this by the speech of our President Vladimir Putin at the International Conference on Artificial Intelligence and Machine Learning, organized by Sber and held in Moscow on November 20. The President noted the cutting-edge, universal, and revolutionary nature of artificial intelligence, which has become “a new page in the development of mankind.”


That speech was also interesting and important because many of its points were directly related to science and education. For example, it was said that in the very near future a presidential decree would be signed approving a new edition of the National Strategy for the Development of Artificial Intelligence. We expect it to introduce a number of significant changes and to specify goals and objectives aimed at expanding fundamental and applied research in generative artificial intelligence and large language models. The Government of the Russian Federation is ready to allocate additional funds for this research, provided that it is co-financed by leading domestic companies and that they create breakthrough products based on the results obtained by scientists. The President emphasized that highly rated Russian universities must ensure the training of specialists and scientific developers who deal with artificial intelligence, expanding their master's and postgraduate programs in this field and increasing (additionally, at the expense of the federal budget) the admission of students to the corresponding basic programs from September 1, 2024. A separate task that the President set for the Government and the AI Alliance Russia is the development of a special educational program on the theory and practice of developing and applying AI, with an emphasis on generative language models. Those enrolled should include the heads of the largest Russian companies, of federal and regional authorities, and of universities and institutions of secondary vocational education. At the same time, a careful analysis should be carried out to determine which industries will change the requirements for existing specialties in the next five years and will require new professions and competencies. The education system should be given specific tasks to adjust career guidance and the training of specialists accordingly. All this, to one degree or another, concerns us greatly.

– As you know, our Tomsk State University has long recognized the priority of tasks related to digitalization, big data and artificial intelligence. What has already been done and is being done within these frontier areas?

– Indeed, “digitalization”, “big data” and “artificial intelligence” have been key words of our intra-university discourse for many years. In the last year, the phrase “generative language models” has joined them. All of them come up at a variety of business meetings, events, discussions, and partnership projects, and, of course, in the course of relevant research and the development of innovative products. Everything that has been done over these years cannot be covered even briefly in one blog issue. If we turn to the news pages of the TSU website, we will see that in just one month our classical (!) university has held more events in these frontier areas than many specialized technical universities could.

A couple of years will pass, and artificial intelligence will become an organic element of literally all university processes: management and administrative, educational, research, sociocultural, and routine. But, as you know, people quickly get used to everything that makes their lives and professional activities more comfortable, and the same will happen in this case.


– Don’t you think that at this stage of AI development its developers should pay attention not only to the opportunities, but also to the risks associated with its use?

– Undoubtedly. By the way, during the Sber conference on AI, Vladimir Putin more than once “redirected” speakers who spoke only about the advantages of digitalization. For example, the statement that the complete digitalization of public administration in the country will inevitably lead to the widespread elimination of bureaucracy immediately raised the question of whether it would also give rise to a new, digital bureaucracy. In his speech, the President emphasized several times that the ethical and legal aspects of the use of new technologies should be constantly considered by the relevant government, professional, and public structures.

– Having agreed, voluntarily or not, to the digitalization and “artificial intellectualization” of everything, including education, we now cannot help but respond to more and more of their challenges. Which one do you consider to be the most relevant at the moment?

– If we bear in mind higher education in particular, then, undoubtedly, one of the main and completely new challenges is the need to learn, as quickly as possible, to function adequately in conditions where the generation of texts using artificial intelligence technologies such as ChatGPT, as well as the creation of various kinds of photo and video content, becomes available to every researcher, university professor, student, and applicant. This challenge clearly demonstrates the ambivalent nature of any technology. At first, as a rule, it is presented as an undoubted good, making people's lives easier, simpler, and more pleasant. But very soon its negative traits begin to manifest themselves, multiplying and turning into new acute problems that reduce almost all the advantages of the technology to zero. As a result, people's lives become much more complicated than before its appearance. Only a year has passed since ChatGPT became available for mass use. A huge number of enthusiastic reviews immediately appeared about how easily and quickly copywriters and content makers can now write texts. However, in February, rather bad news arrived from one of the Moscow universities: the first bachelor's qualifying thesis in management, generated with the help of ChatGPT, had been defended. What was surprising was not the fact of its appearance, since sooner or later this was bound to happen, but that the university created a precedent for defending such work, which, in my opinion, was absolutely wrong and short-sighted. And now, in the middle of the academic year, we already understand what a serious problem we will have to face at the end of it. Surely these will not be isolated attempts by students to test the strength of university systems for checking and quality control of their bachelor's and master's theses.

The same problem awaits us in the upcoming admissions campaign, wherever applicants have to complete some kind of written creative task.


– Is there a solution to this problem?

– I would like to hope that there is, but so far only one thing can be said with confidence: Russian universities, and society in general, have not yet truly realized the seriousness and inevitability of this problem. Such inertia in responding to the negative manifestations of the latest technologies leaves little chance of success for now. The issue of banning the use of micro-earphones by students during exams has still not been properly resolved, even though it was raised by the Ministry of Science and Higher Education back in 2021. Moreover, such a ban has its opponents, and I do not mean students. Some university administrators believe that blocking mobile communications could lead to accidents, for example when a test taker needs emergency medical attention; unfortunately, such incidents have already occurred. To avoid them, other administrators propose developing a common protocol for administering exams, providing for an assistant examiner who constantly monitors the condition of students while mobile communications are blocked. One can, of course, follow the Chinese path, where during mass unified state exams control over the behavior of examinees is entrusted to the military. But, given the speed at which new technologies emerge and spread, such solutions seem futile. What will we do, what protocols will we develop if, say, a fundamentally different type of communication appears and it becomes possible to project texts and visualizations directly onto the retina of a student’s eye? Solutions of a totally different nature need to be sought.

– Will the problem of the limits of using text generators like ChatGPT for universities turn out to be even more difficult than the problem with micro-earphones?

– Without a doubt it will. The main difficulty is that most of the learning process at a university is built on writing as a reproductive and a productive activity. The former reproduces thoughts already expressed by someone else, at some time and in some circumstances: for example, in lecture notes or textbooks. The latter is a product of one’s own thinking and is directly related to one’s creativity and critical thinking. Developing productive writing skills is one of the most important tasks not only for schools, but also for universities. And it can only be accomplished through practice, by having students write many different types of texts. Ideally, a student’s ability to write productively should be fully realized by the time he or she graduates and should be reflected in their main text – the final thesis – both in the degree of novelty of the results they arrive at and in the nature of the author’s style of presentation, and therefore of thinking. Of course, these are quite subtle things, given that there are also general requirements for a universal scientific style, on the one hand, and for special terminology, determined by the nature of the subject area, on the other. However, until now universities have dealt with all this one way or another. And now, thanks to the latest technologies, there comes a moment when everything can change dramatically, both for the better and for the worse.

– What can become better?

– For example, basic training in mathematics. Some time ago, our university had to deal with the fact that a high percentage of students were being expelled due to poor performance in mathematics. TSU took on this challenge and, together with the ENBISYS company, created Plario, an online platform for adaptive learning. It quickly and effectively brings students up to the level they need in mathematics. This digital tutor takes into account each student's individual characteristics and offers personalized content. Tests have shown that Plario helps improve knowledge in just 8 hours, raising it from 22% to 87%. Undoubtedly, similar platforms can and should be created for other disciplines, and, above all, this applies to the natural sciences: physics, chemistry, and so on.
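
To make the general idea of such an adaptive tutor more concrete, here is a minimal sketch of mastery-based exercise selection. It is purely illustrative and is not Plario's actual algorithm: the topics, the mastery threshold, and the update rule are assumptions made only for the example.

```python
import random

# Hypothetical item bank: topic -> list of exercises (illustrative only).
ITEM_BANK = {
    "fractions": ["1/2 + 1/3 = ?", "3/4 - 1/8 = ?"],
    "logarithms": ["log2(8) = ?", "ln(e^3) = ?"],
    "derivatives": ["d/dx x^2 = ?", "d/dx sin(x) = ?"],
}

MASTERY_THRESHOLD = 0.8  # assumed cutoff for "topic has been learned"


def next_exercise(mastery):
    """Pick an exercise from the least-mastered topic, or None if the student is done."""
    weak = [t for t, m in mastery.items() if m < MASTERY_THRESHOLD]
    if not weak:
        return None  # target level reached in every topic
    topic = min(weak, key=mastery.get)  # personalization: weakest topic first
    return topic, random.choice(ITEM_BANK[topic])


def update_mastery(mastery, topic, correct):
    """Nudge the mastery estimate up after a correct answer, down after a mistake."""
    step = 0.3 if correct else -0.1
    mastery[topic] = min(1.0, max(0.0, mastery[topic] + step))


if __name__ == "__main__":
    # Assumed result of an initial diagnostic test, per topic.
    student = {"fractions": 0.2, "logarithms": 0.5, "derivatives": 0.7}
    while (item := next_exercise(student)) is not None:
        topic, exercise = item
        print(f"[{topic}] {exercise}")
        # Simulated answer; in a real tutor this would be the student's actual response.
        update_mastery(student, topic, correct=random.random() > 0.4)
    print("Target level reached:", student)
```

The point of the sketch is the loop itself: the tutor keeps offering material only from topics the student has not yet mastered, which is the general principle behind platforms of this kind.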


– In other words, the humanities seem to be a “closed zone” for artificial intelligence?

– Not at all. Digital tutors based on AI can be in great demand, for example, in learning foreign languages. In addition, if we are talking about the need to systematize and summarize large amounts of data or texts, then this can be done using ChatGPT in sociology, history, cultural studies, or social psychology. In any subject area! I think we still have little idea of the potential capabilities of ChatGPT. For example, if one keeps in mind that ChatGPT can reproduce the literary styles of famous writers and poets, then it is also a very good tool for studying various authors' styles.

But if we allow artificial intelligence to take over the creation of texts that by definition should be productive, that is, original and creative, then disaster is inevitable. And it will affect not only a person’s ability to write texts independently, but also their cultural code, since its formation is possible only on the basis of the native language and of oral and written speech. In the late 1990s, people began to “outsource” their memory to technologies that were revolutionary at the time, removable disks and flash drives, without really thinking about the consequences of this step. At first glance it seemed extremely convenient. And what do we see today? Young people under forty have extremely poor memory. They are not able to remember more than two or three numbers or words in a row, whereas until recently six or seven were considered the average norm. Historians, cultural scientists, and anthropologists are sounding the alarm: very soon the collective memory of a people, the custodian of its cultural code, might begin to collapse. If one day the electricity on the planet goes out and it becomes impossible to use technical devices, humanity will not be able to remember anything of its history. If we delegate the writing of productive texts to artificial intelligence, something it is in principle incapable of, since it learns from texts that already exist, then we will turn from people into “reproductions”, unable to think and express ourselves independently. We do not need such outsourcing!

We need to develop critical thinking in students. And, paradoxically, the same artificial intelligence and ChatGPT can help with this task. To get the answer one needs, one must be able to ask the right question; it is no accident that even the ancient philosophers believed the art of asking questions to be much more complex than the art of answering them. ChatGPT can become a kind of training simulator for the ability to pose questions. In addition, any generated text requires human verification of the accuracy of the facts it contains. It is known that modern text generators quite often offer plausible but not real cases; that is, they can not only compile but also falsify, creating all sorts of fakes. This is another important reason for mandatory critical reflection on what AI produces. The teacher’s task is to explain to students that the same technology can do both good and harm. It is the person who decides whether to develop new competencies and abilities with ChatGPT as a sparring partner, or to take the easy path, using ready-made generated texts and not developing at all.
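
As a purely hypothetical illustration of the “sparring partner” idea, one can ask a generative model to critique a student's question instead of answering it. The sketch below assumes the OpenAI Python SDK and an API key in the environment; the model name and the prompt wording are my own assumptions for the example, not a TSU tool or an established methodology, and any comparable generative-model API could play the same role.

```python
# Hypothetical sketch: a language model as a "sparring partner" that critiques
# the quality of a student's question rather than answering it.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set in
# the environment.
from openai import OpenAI

client = OpenAI()

CRITIC_PROMPT = (
    "You are a Socratic tutor. Do not answer the student's question. "
    "Point out where it is vague, too broad, or based on an unstated assumption, "
    "and suggest how to reformulate it so that it can be answered precisely."
)


def critique_question(question: str) -> str:
    """Return the model's critique of the question, not an answer to it."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name, for illustration only
        messages=[
            {"role": "system", "content": CRITIC_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(critique_question("Why is the economy the way it is?"))
```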


– How can ChatGPT help an educator today?

– The first thing that comes to mind is the development of various homework assignments, tests, and questions for seminars and exams. This also includes reviews of large amounts of data and texts with corresponding conclusions, which would be extremely difficult for a teacher to do alone. And, of course, writing various kinds of reports and other documents. They say that ChatGPT is excellent at composing memos, especially requests for bonuses. In this assistant role, it will also be useful for heads of university departments.

– Now let’s talk about the restrictions and risks. Are there currently reliable ways for educators to recognize generated texts that students pass off as their own?

– Perhaps very soon we will have new ChatGPT models that will help recognize whether a text was created by artificial or by human intelligence. As for today, I would like to remain an optimist and hope that any conscientious and qualified educator of good will is able to distinguish a text generated by an artificial neural network from a student’s original text. The main and most relevant method for identifying the authorship of a text, in my opinion, can only be the “Socratic method”: a lively and fairly detailed dialogue with the student, during which it becomes clear how fluent the student is in the material.

But even if we imagine that absolutely all of our teachers are conscientious, qualified, acting in good faith, and committed to dialogue with students, the question still remains: how long will it take them to apply this method to each student? And how long should thesis defenses now last so that members of the state examination commission can properly evaluate all the works submitted for their review? These questions remain open. Not to mention that some very respected experts, for example Igor Ashmanov, who has been working in the field of artificial intelligence technologies for more than 30 years, are not as optimistic as I am and generally believe that today there are no truly reliable ways to determine whether a text was written by a person or by ChatGPT.

– It seems that Igor Ashmanov is, in principle, very skeptical about artificial intelligence. For him, this is, in fact, not “intelligence” at all, but simply another technological innovation with largely unpredictable consequences.

– And this is a correct and very responsible position: amid the ocean of enthusiastic opinions about and expectations of ChatGPT, the voices of skeptical experts, or rather realists, who think through all possible scenarios for the digitalization of education and society, should also sound loudly and persistently. It is not for nothing that Igor Ashmanov is a member of the Presidential Council for the Development of Civil Society and Human Rights.

Sober and harsh assessments of the situation with ChatGPT from domestic expert communities, in IT, science, and education, help keep it in the zone of constant public attention as an extremely problematic and unpredictable technology. If everything is left to chance, the unbridled commercialization of ChatGPT technology will very quickly defeat all other trends related to its capabilities for real human development, and even common sense itself. We can see what kind of collisions are happening today at OpenAI, the company that created ChatGPT. The interests of people seeking to profit as much as possible from ChatGPT collide with the interests of those who call for first studying in more detail the possible scenarios of this technology's spread into all spheres of society, and only then pressing the start button. So far, the former are winning! We may not notice how our destinies come to be decided entirely by artificial intelligence, appearing before us in the form of a digital “judge”, “doctor”, or some kind of “expert”. In many respects, such areas as government services, banking and retail, housing and communal services, and so on have already been depersonalized. We talk on the phone and correspond not with people, but with bots. And the task of these bots, according to Igor Ashmanov, is not to help us, the clients, but to reduce the load on the help desks of the relevant organizations. It must be said that these technologies are constantly being improved. It is becoming increasingly difficult to recognize that they are not human: their artificial speech has become smooth and even emotionally rich, and their advice and recommendations look and sound ever more logical and meaningful.

The rapidly developing practice of using ChatGPT also shows that its different versions, created by different companies, may simultaneously hold different “worldview positions” or “pictures of the world”, and these may subsequently change! Those who turn to neural networks for answers and advice should know and remember this.

– What do you consider the key idea in discussing the importance of AI for higher education?

– From the above, we can draw only one conclusion: the quality of education can improve dramatically only when artificial intelligence helps students, trains them, and develops them, but does not do the work for them, that is, does not create intellectual products on their behalf. We have received a powerful tool for the cognitive expansion of people and the development of their critical thinking, and at the same time a very risky technology. Here we can draw an analogy with an exoskeleton: it was intended to expand people's physical capabilities, but its excessive use can lead to atrophy of muscles and joints. The challenge for universities is to design ways of using artificial intelligence technologies that make students stronger and more responsible. Having developed the appropriate procedures, universities should become centers for the verification of products created with the help of artificial intelligence. But all this needs to be worked out both didactically and methodologically, with a reassembly of teaching and upbringing that takes the new realities into account.

At the same time, we all understand that all programming for artificial intelligence is still done by people. It is they who must understand not only the advantages, but also the limitations and even the risks of new technologies. It is with this awareness that we must solve the problems associated with further digitalization and the development of artificial intelligence in our field, and respond to all possible challenges, which we will discuss more than once in this blog.

TSU Rector Eduard Galazhinskiy, Member of the Council for Science and Education under the President of the Russian Federation

The conversation was recorded by Irina Kuzheleva-Sagan
