Information Security as a Culture of Trust: Identity, Openness, Responsibility

“An ethic of responsibility is guided by the principle
that a person is accountable for the foreseeable consequences of their actions.”
Max Weber

In this edition of the blog, TSU Rector Eduard Galazhinsky invites us to look at information security from an unusual angle: as a culture of trust within the university. A university is an open and complex system: people and teams, partnerships, data, academic and personal communication. In such an environment, much depends not only on technology but also on norms, on the habit of checking, clarifying, and acting within one’s role. That is why the question of digital identity, who acts on behalf of the university and with what authority, becomes no less important than the question of academic openness: what knowledge we produce and publish, under what conditions, and with what degree of responsibility.

—    Eduard Vladimirovich, if we look at information security not only as a set of technical solutions but also as an element of university culture, where, in your view, does the boundary of the university’s responsibility lie? Especially at a time when openness in science has become both a norm and a value?

—    It is true that information security is not only a matter of technology but also a matter of culture. By its very nature, a university is an open and complex system. It exists through the exchange of knowledge, ideas, data, and the public visibility of research results. But openness is not the same as lack of control. It presupposes trust, and trust, as Niklas Luhmann wrote, serves to reduce social complexity. In the digital environment, that complexity multiplies, which means trust must be reinforced by responsibility.

The university’s boundary of responsibility does not run simply along the line between protected and unprotected, nor can it be reduced to the work of technical services. It lies in the realm of roles and authority. Every member of staff and every student, acting on behalf of the university, makes, whether consciously or not, a decision that may carry consequences.

In this sense, information security becomes part of university culture: a culture of understanding the other, a culture of verification, a culture of proportionality between authority and consequences. It does not stand in opposition to academic openness. On the contrary, it is precisely the responsible organization of access, publication, and presentation of results that allows openness to remain sustainable rather than vulnerable. So, this is not about drawing a hard line between freedom and control. It is about a mature balance, when each person understands their role within the wider system of trust and recognizes the limits of their authority. 

In recent years, the international agenda has increasingly focused on what is called research security. The point here is not to lock science away, quite the opposite. It is to learn how to see risks in advance when research may involve sensitive data, critical technologies, or vulnerable infrastructure, and to manage those risks carefully, without undermining the very nature of the university: openness, collaboration, and freedom of inquiry. Over the past few years, approaches to creating the conditions this kind of security requires have developed noticeably. Universities and governments around the world are redesigning policies and rules for the exchange of research data precisely because the conditions have changed and the cost of mistakes has risen.

For reference:

The higher education sector is genuinely at the center of cyber risk worldwide, and this is confirmed by independent sources using different methodologies. In the United Kingdom, for example, according to a government survey, 91% of higher education institutions reported identifying cyber incidents or breaches in the previous twelve months. In the global context, Check Point Research found that in 2025 education consistently remained the most targeted sector: from January through July, organizations worldwide faced an average of 4,356 attacks per week. And according to the Verizon DBIR 2025, which analyzes incidents and data breaches across 139 countries, the education sector accounted for 1,075 incidents and 851 confirmed data disclosures, with system intrusion, human error, and fraudulent activity among the dominant causes.

Source: https://www.gov.uk/government/statistics/cyber-security-breaches-survey-2025/cyber-security-breaches...


—    How can we even talk about information security through the lens of digital identity and academic openness? At first glance, these seem like different topics, even different levels of discussion.

—    Yes, at first glance it may seem that we are dealing with two separate topics. But in fact, they are united by the same fundamental question: the question of trust and responsibility. Digital identity is, above all, an answer to the question of who is acting and in what capacity. In an offline environment, much of trust is created by context: physical presence, voice, professional reputation, the recognizability of a role. In digital space, many of those signals disappear or become less obvious. They are replaced by something else: formalized roles, verified authority, and an auditable trail of actions. That is why digital identity cannot be reduced to a login and password. We are talking about a more complex architecture of trust: how a person’s connection to a role is verified, how authority and responsibility correspond to one another, how interaction is structured between people and digital services, and how the university manages the risks that arise at every stage of that interaction. That is precisely why, in international practice, digital identity is described as a managed risk model: a system in which identity verification, confirmation of authority, and the balance between convenience and security are treated not as separate matters but within a single logic. 

Academic openness answers a different but equally fundamental question: how knowledge becomes public and what happens to it after publication. In the modern world, the publication of a research result is not the end but the beginning of a new life cycle. Research outputs enter databases and repositories, become linked to other results, are interpreted, reused, and sometimes applied in entirely different contexts. In that situation, the university bears responsibility not only for the quality of the knowledge itself but also for the conditions under which it circulates. Openness does not mean the automatic removal of all restrictions. It requires a considered choice of access regime, a proportional relationship between publicity and possible consequences. And that choice is always made by specific people acting in specific roles. 

This is where digital identity and academic openness converge: the former answers the question of who is acting and with what authority, while the latter answers the question of what knowledge enters the public sphere and under what conditions. In both cases, what matters is mature trust management, not a false opposition between freedom and control.

—    And yet it still sounds like a very technical conversation. 

—    It may appear technical in form, but in substance it is a managerial conversation. A university is one of the most complex organizational systems in the digital environment. Different roles coexist here simultaneously: applicant, student, alumnus, teacher, researcher, administrator, partner. Each role comes with its own tasks, powers, and sphere of responsibility. At the same time, these statuses are constantly changing: a student becomes a graduate, a staff member moves to another unit, a research project begins and ends, a partner joins for a fixed period. This dynamism creates a dense fabric of overlapping roles. And the university’s task is to ensure that this fabric remains coherent: that authority matches tasks, that access rights are proportionate to responsibility, and that changes in status are reflected in digital systems in a timely way. 

In complex structures, serious incidents often begin with something small: disproportionate access, a role left in place out of habit, unverified authority. This is not about distrusting people. It is about protecting people themselves and their professional reputation. In a digital environment, trust must be institutionally structured: confirmed through role, proportionate to task, and updated in a timely manner. This helps prevent situations in which a person ends up being held responsible for actions carried out in their name. That is why access rights should be defined by task rather than by formal status. And each role must be confirmed as promptly as it is revoked once the grounds for it no longer exist.
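
To make the principle tangible, here is a minimal sketch in Python of access defined by task and bounded by the grounds for the role. The roles, rights, and dates are hypothetical; the point is only that once the grounds lapse, the right disappears with them, so a role cannot quietly remain in place out of habit.

```python
# A minimal sketch of task-based access with expiring grounds. The roles,
# rights, and dates are invented; real identity systems are far richer.
from dataclasses import dataclass
from datetime import date

@dataclass
class RoleGrant:
    role: str          # e.g. "project_researcher"
    rights: set[str]   # rights attached to the role, not to the person
    expires: date      # the grounds for the role: project end, contract term

def is_allowed(grants: list[RoleGrant], right: str, today: date) -> bool:
    """Permit an action only through a role whose grounds still exist."""
    return any(right in g.rights and today <= g.expires for g in grants)

grants = [RoleGrant("project_researcher", {"read_dataset"}, date(2025, 6, 30))]
print(is_allowed(grants, "read_dataset", date(2025, 5, 1)))   # True: role current
print(is_allowed(grants, "read_dataset", date(2025, 12, 1)))  # False: grounds lapsed
```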

—    Then the next question is this: is the very logic of security changing? It used to seem that it was enough to build a perimeter: if there is a network boundary, everything inside is protected. 

—    The idea of the perimeter really was dominant for a long time. An organization was imagined as a space with a clear boundary: insiders on one side, the outside world on the other. But the university has long existed in a different reality. The contemporary academic environment is distributed: people work from different locations, projects are carried out with external partners, and a wide variety of digital services are used. In this situation, boundaries become less physical and more functional. That is why the logic of security is gradually shifting from guarding the outer perimeter to managing access within the system itself. This is about proportionality: access is granted not simply on the basis of belonging to the organization but in accordance with the task, the role, and the current context of work.

In international practice, this approach is often referred to as a zero-trust architecture. But it is important to understand the meaning of that term correctly. It does not mean total distrust. It means that trust ceases to be automatic and tied to location; it becomes a procedure, something that is verified and renewed as needed. In other words, security ceases to be a wall and becomes a system of managed decisions. 
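
As an illustration of that logic, the sketch below (the names and policy are invented for the example) checks every request against verified identity, role, and context rather than network location: trust is renewed per request instead of being assumed once at the perimeter.

```python
# An illustrative sketch of the zero-trust idea: no request is trusted for
# being "inside"; each is checked against identity, role, and context.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    role: str
    resource: str
    authenticated: bool     # identity verified for this session
    device_compliant: bool  # a context signal, e.g. a managed device

# Which role may touch which resource: authority proportional to task.
POLICY = {("researcher", "lab_dataset"), ("librarian", "catalogue")}

def decide(req: Request) -> bool:
    """Renew trust per request: verify identity and context, then authority."""
    if not (req.authenticated and req.device_compliant):
        return False
    return (req.role, req.resource) in POLICY

print(decide(Request("a.ivanov", "researcher", "lab_dataset", True, True)))   # True
print(decide(Request("a.ivanov", "researcher", "lab_dataset", True, False)))  # False
```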

—    What does that mean for an ordinary staff member: a lecturer, a researcher, an administrator? 

—    It means several basic things. First, in the digital environment we do not act only as private individuals but as bearers of a particular role. That means that within the working environment, personal convenience does not take priority over the proportionality of authority to task. Access rights, communication channels, the volume of information available, all of this should correspond to the person’s role at a given moment. This approach does not weaken trust; it structures it. 

Second, faster does not always mean more effective. When we save a few minutes by bypassing established rules, we often shift the problem into the future. That is how what one might call risk debt is created: an accumulation of managerial risks and deferred problems that have not yet manifested themselves but already exist. 

Risk debt is made up of small things: excessive access granted just in case, work files sent through channels not intended for official information, saved passwords, decisions taken in the spirit of “let’s do it now and sort it out later.” As long as everything is functioning normally, it seems that nothing serious is happening. But when a staff member’s status changes, during an internal review, after a technical failure, or in a dispute over access rights, it is precisely these small things that become the source of systemic problems.

Third, there are practices that may seem minor but have a disproportionately large effect: multi-factor authentication, regular review of access rights, careful handling of work accounts, and a clear procedure for any unusual situation. This is not bureaucracy for bureaucracy’s sake. It is a way to preserve the resilience of the university environment and protect trust in it.

For reference: 

Both international recommendations and Russian state standards understand digital identity not as a one-time check but as a managed representation of a person within a system through a set of attributes by which they are distinguishable and to which their actions and responsibility are linked. From this follows an important distinction in terminology. Identity is the profile: what the system knows about a person. Identification is the step in which the person states who they are within the system, for example by presenting an identifier such as an account, number, or address. The next step is confirmation that the person is in fact the legitimate holder of that account, what in everyday speech is often called verification, though in formal documents it is treated separately from identity itself. Only after that does the system determine which actions are authorized.
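
The four steps can be shown in a toy example (purely illustrative; real systems delegate this to dedicated identity providers): a stored profile is the identity, naming an account is identification, proving possession of its secret is verification, and only then is a specific action authorized.

```python
# A toy walkthrough of the terminology: identity -> identification ->
# verification -> authorization. Illustrative only; not a production scheme.
import hashlib
import hmac

# Identity: what the system knows about a person (profile and attributes).
IDENTITY_STORE = {
    "i.petrov": {"secret_hash": hashlib.sha256(b"correct horse").hexdigest(),
                 "authorized": {"submit_grades"}},
}

def identify(claimed_id: str):
    """Identification: the person states who they are in the system."""
    return IDENTITY_STORE.get(claimed_id)

def verify(profile: dict, secret: bytes) -> bool:
    """Verification: confirm the claimant really holds the account."""
    return hmac.compare_digest(profile["secret_hash"],
                               hashlib.sha256(secret).hexdigest())

def authorize(profile: dict, action: str) -> bool:
    """Authorization: only now decide which actions are permitted."""
    return action in profile["authorized"]

profile = identify("i.petrov")
if profile and verify(profile, b"correct horse"):
    print(authorize(profile, "submit_grades"))  # True
```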

The NIST Digital Identity Guidelines, in their 2025 edition, place particular emphasis on a risk-based approach and take into account new forms of deception in which what is attacked is not so much the technical layer as trust itself, including through fabricated voice and video. A simple real-life example: in 2024, a case was reported in Hong Kong where an employee was persuaded to transfer a large sum of money after a video meeting with management that turned out to be an AI fake.

—    But what does any of this have to do with university openness? Identity seems to be about access to networks and services, while openness is about publishing research results. 

—    The connection is direct. Contemporary university openness is impossible without an infrastructure of trust. Repositories, platforms, access to data and publications, collaborative research projects, all of this works only if the participants clearly understand who is acting and in what capacity. The higher the level of openness, the greater the demands placed on the precision of digital identity. It is not enough simply to grant access. It is essential to understand to whom access has been granted, under what conditions, on what grounds, and what the zone of responsibility attached to that access is. Openness without manageability turns into vulnerability. 

This is especially visible in the research environment in the model of federated identity: when a digital profile, verified by the university, allows a staff member to work in external services such as library systems, research platforms, and shared infrastructures. This approach makes interaction convenient and scalable, but it presupposes mature governance of roles and authority. The same logic applies within the university. A work account is not just a means of entering the system. It is confirmation of a role through which a person gains access to educational and research resources, electronic libraries, and internal services. And that is precisely why managing that role, the timely updating of status and the correctness of access rights, becomes a matter of principle. 
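
A deliberately simplified sketch of the federated pattern: the home university signs an assertion about who is acting and in what role, and the external service verifies that signature instead of keeping its own accounts. Real federations rely on standards such as SAML or OpenID Connect with public-key cryptography; the shared secret below stands in only for brevity.

```python
# A simplified model of federated identity: the university asserts a role,
# the external service verifies the assertion rather than a local password.
import hashlib
import hmac
import json
import time

FEDERATION_KEY = b"demo-shared-secret"  # stand-in for real federation trust

def issue_assertion(user: str, role: str) -> dict:
    """Home university: sign a statement of who is acting, in what capacity."""
    payload = json.dumps({"user": user, "role": role,
                          "issued": int(time.time())}).encode()
    sig = hmac.new(FEDERATION_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def accept_assertion(assertion: dict, max_age: int = 300):
    """External service: check signature and freshness, then trust the claims."""
    expected = hmac.new(FEDERATION_KEY, assertion["payload"],
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, assertion["sig"]):
        return None
    claims = json.loads(assertion["payload"])
    return claims if time.time() - claims["issued"] <= max_age else None

print(accept_assertion(issue_assertion("m.sidorova", "researcher")))
```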

More broadly, the question of openness inevitably leads to the question of data. At the university, data is not merely a technical resource but the basis for managerial and academic decisions. It underpins educational processes, research, and strategic conclusions. And data is almost always connected to people, either directly or through the consequences of its use. So the question of how we handle data is, in essence, a question of responsibility: responsibility for proper access, for interpretation, for the conditions of publication, and for the possible consequences of openness. 

Readers of the blog may feel that I am repeating myself at times, just in different words. I am doing this deliberately, in order to anchor the most important aspects of today’s topic. 

—    What exactly do you mean by responsibility? Data security is usually understood as preventing leaks. 

—    Leaks are only one of the possible risks. Responsibility in working with data is much broader. In the classical understanding, data has three basic properties: availability, integrity, and confidentiality. It is important not to absolutize any one of them. If we think only in terms of restriction, we risk damaging availability and making our own work more difficult. If we focus exclusively on convenience, control and quality suffer. So this is not about maximum protection but about a proportionate regime for handling data. In the university environment, data differs by its nature: personal, administrative, research-related. Each type has its own life cycle, its own restrictions, and its own risks. Access to data should be determined by task and role, not by the principle of “let it be available just in case.” Exactly as much as is needed to do the work, no more and no less. That thought may sound obvious, but it is precisely what makes the system manageable.

With research data, the picture is more complex: here the demands of openness, reproducibility, partnership, and legal restrictions all intersect. To keep things clear, it is important to avoid unnecessary complexity where it can be avoided. If data can be anonymized, if public and non-public parts can be separated, if different access regimes can be established, then that is what should be done. Openness can be layered: fully open publication, restricted access, or access upon request. What matters fundamentally is that the chosen regime be deliberate and explainable. And for these processes not to degenerate into formal bureaucracy, data must have an identified steward and a clear context: who collected it, for what purpose, on what grounds, where the original version is stored, who has access, and how changes are recorded. Then, a year later or several years later, the result can be reproduced, justified, and not lost in a thicket of versions and accidental copies.
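
What an identified steward and a clear context might look like as a record is sketched below. The fields and values are invented for illustration; the point is that the access regime is an explicit, explainable choice rather than a default.

```python
# An illustrative dataset record: a named steward, a deliberate access
# regime, and enough context to explain the choice years later.
from dataclasses import dataclass, field
from enum import Enum

class AccessRegime(Enum):
    OPEN = "fully open publication"
    RESTRICTED = "restricted access"
    ON_REQUEST = "access upon request"

@dataclass
class DatasetRecord:
    title: str
    steward: str               # the identified person responsible
    regime: AccessRegime       # a deliberate, explainable choice
    collected_by: str
    purpose: str
    anonymized: bool
    master_copy: str           # where the original version is stored
    change_log: list[str] = field(default_factory=list)

record = DatasetRecord(
    title="Survey of first-year study habits",
    steward="o.smirnova",
    regime=AccessRegime.ON_REQUEST,
    collected_by="Laboratory of Education Research",
    purpose="Longitudinal study, 2024 cohort",
    anonymized=True,
    master_copy="institutional repository, v1.0",
)
record.change_log.append("2025-02-01: public subset separated from raw responses")
print(record.regime.value)  # "access upon request"
```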

—    And what is the university’s role here? This cannot be only about the personal care of individual staff members. It is also an institutional responsibility. 

—    The university’s role is to give people not just rules but working routes and support: where to go, how to coordinate complex cases, what our procedures are for storage and publication, what tools are available. And it is important that these routes be short and simple: the easier the legitimate path, the less temptation there is to improvise.

—    From the standpoint of information security, what is the most valuable and the most vulnerable aspect of academic openness today?

—    The value is the same as ever, and I have already mentioned it: verifiability, reproducibility, and the speed of academic exchange. The vulnerability lies in the fact that publication in the digital age has become much richer and more complex from the standpoint of its technical underpinning.

For reference: 

As a rule, alongside the text of a research article there are also published source data, or part of it, cleaned datasets, processing protocols, analysis scripts, software code, trained models, experiment parameters, instructions for running the work, and sometimes containers, computational environment configurations, and access to the infrastructure without which the result cannot be reproduced. A research finding becomes not only something to be read but something executable: it can be checked, reproduced, reused, and serve as the basis for a new result. 


Of course, this increases the value of the material: the more complete the package, the less one has to take on trust and the more real verifiability becomes. But at the same time the cost of a publication error rises: one extra file, an uncleaned fragment of data, an exposed access key, an accidentally retained participant identifier, and academic openness turns into a leak, a legal risk, or reputational damage. That is why the more complex and complete publication becomes, the more important precise decisions become about what exactly enters the public sphere, in what amount, under what restrictions, and in what form.

In essence, the boundary between the need to publish and the obligation to protect data lies in the maturity of the researcher’s decision making. Academic openness today requires prior assessment: who is affected by the data, whether there are contractual restrictions, ethical constraints, participants’ rights, intellectual property issues, or potential sensitivity in the results. Special care is required in preparing materials for publication on dual-use technologies, and more broadly on any breakthrough technologies, whether they belong directly to military and technical spheres or to the humanities. A researcher may, without realizing it, overstep their own judgment that the content of a publication does not touch on state secrets, and that may end badly for them. All of this belongs to the zone of the scholar’s professional responsibility, and it begins not at the moment of publication but earlier: at the stage of conceiving and designing the research. What data are we collecting, on what grounds, how will we store it, what can we make open, what will require anonymization or another form of access, what obligations will the team have toward participants and partners? If these decisions are made in advance, publication becomes a natural continuation of the work rather than a last-minute dilemma.

—    So we are talking about publication hygiene. 

—    Exactly. I will repeat myself once more: there is a set of questions that need to be asked before the results of research work move into the university’s external environment. What exactly are we publishing, and can the material be divided into parts with different access regimes without losing scientific meaning? Do the results contain personal data, commercial secrets, partner restrictions, grant obligations, or conditions attached to participant consent? Can the data be anonymized and described safely in such a way that it remains useful without harming people? Who at the university should help make the decision if the case sits on the borderline? In a mature system, the most difficult questions involving serious potential risks are not left to an individual’s personal discretion. That is why expert committees function at nearly all faculties to determine whether a prepared article falls within the sphere of state secrecy or not. Unfortunately, these committees sometimes work formally, without truly engaging with the content of future publications, and sooner or later that can lead to unnecessary incidents. I would ask colleagues to pay close attention to this and draw the appropriate conclusions. 

—    But is that not what people call censorship? 

—    Of course not. Censorship is prohibition on external grounds. Publication hygiene is the calibration of a regime of responsibility. Openness remains a value, but it must be shaped in a way that does not undermine trust in science or in the university. 

—    If publication today is not the final point but the start of a result’s life cycle, what happens next? Where does science live after an article is published?

—    After publication, a scientific result enters a new phase of existence. Quite quickly it becomes part of the digital environment: it is indexed, linked to other works, enriched with metadata, and incorporated into various analytical and information systems. An article, a dataset, software code, a preprint, all these are no longer just a text or a file. They are elements of a broader ecosystem in which results are compared, grouped by topics and authors, used in reviews, analytics, and managerial decisions. In that sense, science exists not only as a body of research but also as information about science itself: information about authors and their affiliations, grants and projects, thematic areas, citation, and collaboration. It is from this data that a full picture of scientific activity is built in national and international space. Incidentally, in Russia this logic has led to the creation of the Science and Innovation domain, where all this information has been accumulated.

And here a fundamental fork appears. One can treat this digital environment as an unavoidable background in which one simply has to work. Or one can understand that the accessibility and transparency of research information are a strategic issue for the university. The way this environment is structured determines who will be visible and how, which directions will stand out and which will become less discernible, and which decisions will be based on transparent analytics and which on opaque algorithms. 

For reference: in 2024, the Barcelona Declaration on Open Research Information was adopted. Its key idea is to ensure the openness and verifiability of the data on the basis of which research activity is evaluated and managerial decisions are made. This is no longer only about open access to publication texts but about the openness of research information itself: metadata, indicators, and infrastructures through which science becomes visible. This is an important institutional shift. It moves the conversation from the question of what we publish to the question of how the environment is structured in which science becomes visible and assessable.

For reference: 

Although the university does not control the global platforms themselves, it can control the foundations on which trust in research results rests: identifiers, connectivity, and accounting rules. For a researcher, one such anchor is ORCID, a persistent international identifier that helps correctly connect a person with their contributions and affiliations across different systems. For research outputs, there are DOIs and metadata. Their value lies not in where something happened to appear but in stable addressability and connectivity, allowing work to be easily found, correctly cited, and linked to versions, corrections, and related objects. For data and other research outputs, the requirements of responsibility are even higher: the DataCite infrastructure, for example, emphasizes that the organization assigning the DOI remains the responsible steward, obliged to maintain both the content and the metadata descriptions. Finally, a university can deliberately choose the sources on which it relies for analytics and evaluation: around the world there is growing movement toward open catalogs and open knowledge graphs as alternatives to fully closed platforms. OpenAlex, for example, presents itself as an open catalog linking publications, authors, organizations, and funding, and making those data available for use and analysis.
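
As a small practical illustration of these open infrastructures, the sketch below resolves a DOI through the public OpenAlex API and reads off the linked authors with their ORCID iDs. It assumes the third-party requests package; the DOI is an arbitrary example of an openly indexed article.

```python
# Resolving a DOI via the open OpenAlex catalog and listing the authors
# and ORCID iDs linked to the work. Requires the `requests` package.
import requests

def work_by_doi(doi: str) -> dict:
    """Fetch the open metadata record for a work from OpenAlex."""
    url = f"https://api.openalex.org/works/https://doi.org/{doi}"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return resp.json()

work = work_by_doi("10.7717/peerj.4375")  # an arbitrary open-access example
print(work.get("display_name"))           # title of the work
for authorship in work.get("authorships", []):
    author = authorship.get("author", {})
    # Persistent identifiers keep person-to-work links stable across systems.
    print(author.get("display_name"), author.get("orcid"))
```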

—    At first glance, all of this sounds more like questions of accessibility and infrastructure. Where, then, is information security?

—    It manifests itself in two fundamental dimensions that were not often linked with information security in the past. The first is the security of the visibility of science, that is, the security of the research information that universities and research institutes place in open access. If research information is opaque and cannot be verified, the university becomes dependent on external interpretations: on how metrics are calculated, how analytical connections are built, how inaccuracies are corrected. And errors are inevitable, from incorrect affiliation to a distorted picture of particular scientific areas. In this context, openness of research information becomes not simply a principle but a mechanism of self-correction: transparent data is easier to verify, clarify, and reproduce.

The second dimension is the security of trust in the scientific corpus itself. Contemporary scholarly communication is facing a rise in unethical publication practices, from formal breaches in peer review procedures to systemic distortions in publication data. This is no longer only a matter of publishing policy. For universities, it is a matter of reputation, the quality of the academic environment, and the soundness of managerial decisions based on publication analytics.

That is why scholarly communication must be treated as a managed system. Metadata quality, correct work with identifiers, support for authors, transparent correction procedures, careful analytics: all of this ceases to be an auxiliary function. It becomes part of the resilience of the university as an institution of trust. In this sense, libraries, repositories, and expert structures are not external support for science. They are science’s own infrastructure, ensuring precision, reproducibility, and trust.

—    Eduard Vladimirovich, we have spoken about identity, data, and the platform environment of science. But it still feels as though the human being remains at the center of the problem, as is so often the case in any seemingly technological issue. Where do the humanities fit here? And why does the conversation about information security feel incomplete without them?

—    Because information security begins not with technology but with interpretation. It begins with the way a person understands a situation and decides: whether to trust or clarify, to speed the process up or pause, to act out of habit or to verify the grounds. In the digital environment, vulnerability emerges not only at the level of technology but also at the level of interpretation, through context, tone, appeal to authority, or a sense of urgency. That is why risk arises wherever a person acts automatically, without relating their action to their role and responsibility. Here, the humanities become fundamentally important. They cultivate the ability to distinguish motives, feel the context, notice mismatches in language and intonation, and understand the consequences of one’s own decisions. No instruction can replace the ability to engage with a situation thoughtfully. 

At Tomsk State University, over the past couple of years, dozens if not hundreds of staff members have encountered this problem after receiving messages, voice messages, and even phone calls allegedly from the rector. At times other quite recognizable university officials were also impersonated. The paradox is that these were among the most convincing communications possible: the right tone, the right words, a familiar manner of speaking. Clearly, these were deepfakes. What was being tested was not so much the systems as the people: their trustfulness, their respect for authority, their habit of carrying out requests from above without questions. But this is precisely where the key skill of digital hygiene should come into play: the pause. Clarification through an official channel, verification of authority, refusal to act urgently and immediately without confirmation.

In general, practice has shown that resilience is determined not only by system settings but also by the culture of behavior. When the university regularly reminds people about communication rules, about the fact that sensitive matters are not handled in unofficial channels and are not accompanied by the pressure of urgency, this gradually becomes the norm. And perhaps the main conclusion here is that the best protection is not a new interface feature but a mature attitude toward one’s own role. If a message demands immediate action and appeals to authority, that is not a reason to move faster. It is a reason to stop and verify.

—    So in your view, risk is not only technical but also communicative? 

—    Yes. A university cannot exist without trust and communication: we are constantly coordinating, running joint projects, exchanging data, discussing research, giving public talks. Any system built on communication is vulnerable to failures of meaning. So I would put it this way: digital security is part of a broader culture of responsibility. It stands alongside academic integrity, the ethics of working with people in research, respect for authorship, and the norms of public speech. If we recognize that the university is responsible for the quality of knowledge, then we cannot refuse responsibility for the quality of communication around that knowledge. Strange as it may sound, information security is always to a great extent a humanities task. It is connected to how responsibility is structured, how people understand their roles, how they speak to one another, and how capable they are of doubt and clarification. There is, however, an important nuance here: security must not turn into paranoia. The university cannot shut itself off and stop communicating with its many audiences. That would contradict its mission. Our task is not to interrupt communication but to improve its quality. Not to prohibit openness but to give it form. Not to fear mistakes but to create practices in which mistakes are caught early and do not become catastrophic. 

For the digital environment not to turn into a trap, the university must cultivate in its people, staff and students alike, the skill of distinguishing: role from personality, information from data, opinion from fact, what is open from what is sensitive, urgency from manipulation. That is what determines the resilience of any organization in the face of cyber threats today.


—    That seems like a good transition to the practical part of the conversation: what simple, clear rules and action routes should an employee have for everyday use so that this culture of distinction works not only in words but in reality? 

—    I would begin with something simple: an employee does not need to know every possible threat. They need to know a few stable rules and one clear route of action for when something goes wrong. 

The first rule is to pause. If a request is unusual, if it concerns access, data, money, documents, installations, or urgent confirmations, a pause helps establish the first barrier. In a digital environment, the costliest mistake is usually made at speed. 

The second rule is not to take repeated identification procedures and requests for confirmation as a personal insult. There is a great deal of respect and trust within the university, and that is right. But digital security works differently: what matters is confirming that this particular request really comes from someone who has the authority to make it. 

The third rule is minimally sufficient action. Even if a request appears logical, we do not broaden access just in case, we do not send more than is necessary, and we do not take steps that cannot be reversed. We act strictly within the scope of the task and only after verification. 

The fourth rule is to change passwords every month or two, keep antivirus software up to date, and remain alert to phishing links.

Of course, people differ in how sensitive they are to unusual requests and in their degree of responsibility. To understand whether a request requires a response, it is important to know the basic red flags. For example, one should be wary of sharp urgency (“this needs to be done right now or else”), requests to bypass procedure (“let’s do it without paperwork,” “just make an exception”), unusual secrecy (“don’t tell anyone”), an unexpected change of channel or contact, and any attachments, links, or files that demand to be opened quickly and acted upon.

As soon as even the slightest suspicion arises, it is essential to contact specialists in the relevant unit or department. There is no need to investigate on your own. One should stay calm, stop taking further action, record what happened (the time, what was seen, what the request was), and pass it on to the service responsible for information security. But if a mistake has already happened, the worst thing is to remain silent out of fear. In security, the cost of concealment is almost always higher than the cost of an honest report. We deal with risks professionally; we do not hunt for culprits and punish them in the heat of the moment.

A culture of information hygiene must be supported not only at the level of university leadership but also at the level of unit heads. If a manager themselves creates a permanent regime of urgency and asks people to bypass procedures because it has to be done, they nullify any security regulations. But if a manager instead explicitly says that verification is normal, that a pause after a request is acceptable, and that doubt is a sign of professionalism and critical thinking, then security ceases to be someone else’s duty and becomes part of normal work. 

There is one more thing that managers tend to underestimate: staff access rights and roles should be reviewed regularly. Not because someone might misuse them, but because the university lives through change. Projects end, people move, tasks change. If roles are not updated, the organization begins to accumulate digital chaos, and chaos is a poor environment for the trust to which we keep returning today.
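
In practice, such a review can start as something very simple, as in the sketch below (the data is illustrative): compare each grant against the grounds that justified it and flag those whose grounds have lapsed.

```python
# A minimal access-review sketch: flag role grants whose grounds
# (for example, a project end date) have already passed.
from datetime import date

# (user, role, date until which the grounds for the role exist)
grants = [
    ("a.orlov", "project_researcher", date(2024, 12, 31)),
    ("e.belova", "project_researcher", date(2026, 6, 30)),
]

def stale_grants(grants, today: date):
    """Roles whose grounds no longer exist and should be revoked."""
    return [(user, role) for user, role, until in grants if until < today]

for user, role in stale_grants(grants, date(2025, 3, 1)):
    print(f"review: revoke '{role}' for {user} (grounds lapsed)")
```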

—    And are there specific rules for research work? 

—    Yes, and first of all they are connected with attentiveness to one’s own work. Before publishing data, it is important genuinely to stop and ask oneself several questions: what can be opened immediately, what requires anonymization, what needs to be coordinated, and what would be wiser to release later or under restricted access. Responsibility is built from these decisions: responsibility toward research participants, toward partners, toward colleagues. We are responsible not only for the result itself but also for the consequences of its publication. 

There is another thing people rarely think about. Over time, even the author may find it difficult to remember what exactly lies behind a given set of files: where the data came from, what filters were applied, what assumptions were made. That is why recording the provenance of data and the conditions of its processing is not a formality but an element of research culture. It is precisely such small things that make it possible a year or several years later to reproduce a result and calmly explain its logic.
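
Such recording need not be heavyweight. Below is a sketch of an append-only provenance log (the format and entries are invented for illustration): each step notes what was done to the data and on what assumption.

```python
# An illustrative append-only provenance log: one JSON line per step,
# recording where the data came from, what was filtered, what was assumed.
import json
from datetime import datetime, timezone

def log_step(logfile: str, action: str, detail: str) -> None:
    """Append one provenance entry for later reproduction of the result."""
    entry = {"time": datetime.now(timezone.utc).isoformat(),
             "action": action, "detail": detail}
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")

log_step("provenance.jsonl", "source", "export from survey platform, 2024 cohort")
log_step("provenance.jsonl", "filter", "dropped responses under 80% completion")
log_step("provenance.jsonl", "assumption", "missing ages imputed with cohort median")
```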

And there is one more principle: to make use of the university’s institutional capacities. Repositories, consultations, publication review, the habit of placing materials in the correct form and on the appropriate platform, all of this is not excessive bureaucracy. It is a way to make the scientific result more resilient, more intelligible, and better protected against accidental distortion.

—    And what if the staff member is a reviewer? That also involves confidentiality and responsibility.

—    Yes, and here the responsibility is especially high. As Deputy Chair of the Higher Attestation Commission, I can say that today one of the most vulnerable points in scholarly communication is peer review done with the help of ChatGPT. We see AI entering this area as an everyday convenience tool. Unfortunately, many reviewers delegate too much of their authority to it. 

For reference: 

According to Frontiers, based on a global survey of 1,645 active researchers, about half of reviewers already acknowledge using artificial intelligence tools in peer review, and this use is often not disclosed. The problem is not the mere fact of AI assistance, but its function: it is one thing to tidy up the wording of a review, structure comments, and improve clarity of expression; it is quite another effectively to outsource professional judgment to a model, asking it to assess novelty, methodological soundness, or the significance of the contribution. That is where the risk to trust arises: the reader encounters a confident tone but cannot tell where the reviewer’s competence ends and automatic generation begins.

Source: Most peer reviewers now use AI, and publishing policy must keep pace

As one sign of the scale of the phenomenon, the team at Pangram Labs published an assessment based on a large corpus of reviews. Roughly 21% of the reviews appeared to be fully generated, and more than half showed signs of AI involvement. Even allowing for the limitations of any detection method (this was an estimate by an external team, not an academic study), the trend is practically important: peer review is becoming a zone of soft erosion of trust, through AI hallucinations, fabricated references, mechanical objections, and persuasive stylistics without any real understanding of the work’s contribution to science.

Source: Pangram Predicts 21% of ICLR Reviews are AI-Generated

It is important to draw a clear boundary: AI may be a tool for formatting, but not a tool for decision making. It can be used to put the text of a review into shape, to structure comments, to smooth linguistic roughness, but professional judgment about novelty, the soundness of argument, and the quality of method cannot be handed over to a language model. And the second principle is this: the confidentiality of the manuscript matters more than convenience. Any tools used must comply with the policy of the publisher or conference, and with the rules governing the handling of materials. 

When an expert inserts a manuscript or fragments of it into an external service, they are effectively transferring the material to a third party. In some cases this may conflict with confidentiality requirements, copyright, or internal regulations. So the key principle remains the same: convenience must not outweigh professional responsibility. Peer review is not only expert evaluation but also an obligation to preserve trust in the scholarly process.

—    If you were to sum it all up, how would you formulate the main message of this conversation?

—    That information security is not a topic of anxiety but a topic of maturity. The university cannot and should not become a closed system. Our strength lies in openness, in the exchange of knowledge, in trust. But for that very reason, we are obliged to be precise in the way that openness is structured. Digital identity is about answering the questions who is acting and in what role. Academic openness is about answering the questions what we publish, under what conditions, and what consequences that may have. And the boundary of responsibility runs precisely where we stop relying on chance and begin relying on culture and discipline: on rules, on routes of action, on the habit of checking, and on respect for data and for people.

I would like a simple norm to take root in our university: security is not a separate discipline for specialists but part of everyday professional literacy. Like academic integrity, like responsibility for one’s words, like respect for the person. We can be as open as possible and at the same time as careful as necessary. This is not a compromise. It is the contemporary standard of trust.

 

Eduard Galazhinsky

Rector of TSU

Member of the Council for Science and Education under the President of the Russian Federation

Vice President of the Russian Academy of Education

Vice President of the Russian Union of Rectors

Deputy Chair of the Higher Attestation Commission of the Russian Federation

 

Interview recorded and reference materials compiled by Irina Kuzheleva-Sagan

