As Yana Lantratova, head of the State Duma Committee on the Development of Civil Society, noted, even today the majority of Russian students prefer Western films and TV series to domestic ones, and the image of the homeland in them can hardly be called anything but "barbaric".

More radical proposals were also put forward at the round table, such as banning people under 16 from creating accounts on social networks, following the example of Australia, where such a measure has already been adopted. However, Russian deputies proposed starting small: enshrining in law the concepts of "virtual assistant" and "large language model", the technologies now widely used in smart speakers. After all, as Nina Ostanina, head of the Committee on Family Protection, Fatherhood, Motherhood and Childhood, pointed out, millions of people keep these devices at home and follow their advice, yet never stop to think about where "Alice", "Marusya", "Oleg", "Salyut", "Max" and God knows who else get their recommendations. Deputies are ready to put forward the first initiatives next spring.
On the whole, the idea has merit. In everyday life, few people stop to think about what guides a voice assistant when it answers a request with one piece of advice or another. Hardly anyone is tormented by doubts over the source of a weather forecast or a traffic report. But step a little further, and the problems begin.
Take cooking as an example: there have been documented cases of neural networks cheerfully suggesting that glue be added to pizza (non-toxic glue, of course, so as not to harm anyone's health) so that the cheese adheres better to the base. Other language models have recommended eating stones to improve digestion, apparently gleaning the secret from ostriches. And fitness fans have been invited to run with scissors, since this supposedly not only strengthens the cardiovascular system but also improves concentration.
Unfortunately, there are probably quite a few people who have followed a smart speaker's instructions without thinking, only to end up in a hospital bed. Simply because they treat neural networks as genuine artificial intelligence rather than what they (still) are: a very powerful autocomplete, like the spelling correctors on our phones. As far as anyone can tell, the AI does not understand what "glue" actually is, even though it can generate an article about it complete with links to dozens of encyclopedias. It has merely matched keywords and found a forum post in which someone joked about mixing glue into cheese. The system does not understand what a "joke" is either, but it sees the words "pizza", "glue" and "recipe" next to each other, and that is enough for it to broadcast the advice to users.
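To make the "powerful autocomplete" comparison concrete, here is a deliberately tiny sketch in Python. It is an illustrative toy (a bigram model, far simpler than a real neural network, trained on an invented snippet of text), but it shows the same failure mode the paragraph above describes: the program records only which word follows which, so a joke in its source text comes back out as a straight-faced recommendation.

```python
import random
from collections import defaultdict

# A toy "language model": it learns nothing except which word tends to
# follow which word in its training text. The corpus below is invented
# for this example and includes a joke, exactly the kind of thing real
# models scrape from forums.
corpus = (
    "to make pizza , add cheese to the base . "
    "as a joke , add glue to the cheese so it sticks better . "
    "add tomato sauce to the base ."
).split()

# For every word, remember every word ever seen right after it.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def autocomplete(first_word: str, max_words: int = 8) -> str:
    """Continue a prompt by repeatedly picking a word seen after the last one."""
    words = [first_word]
    for _ in range(max_words):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(autocomplete("add"))
# Possible output: "add glue to the cheese so it sticks"
# The model has no concept of glue, pizza or sarcasm; "glue" simply
# follows "add" somewhere in its data, so out it comes as advice.
```

Real assistants are vastly more sophisticated, but the underlying task is the same: predict plausible next words from patterns in text, with no built-in notion of truth or safety.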
Yes, programmers work tirelessly to teach neural networks not to harm humans. But it would still be sensible to require the developers of such devices to indicate where their language models get their information.
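Purely as an illustration of what such a requirement could mean in practice (the names below are invented for this sketch, not any real API or regulation), one could imagine an assistant's answer being forbidden to reach the user without provenance attached:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: every answer the assistant surfaces carries the
# provenance of the text it was derived from, so a user or a regulator
# can check where the advice came from.

@dataclass
class SourcedAnswer:
    text: str                                          # what the assistant says aloud
    sources: list[str] = field(default_factory=list)   # URLs or dataset identifiers

def publish(answer: SourcedAnswer) -> SourcedAnswer:
    """Refuse to surface advice that cannot point to any source at all."""
    if not answer.sources:
        raise ValueError("answer has no traceable source; not publishing")
    return answer

publish(SourcedAnswer(
    text="Preheat the oven to 250 degrees before baking the pizza.",
    sources=["https://example.com/pizza-recipe"],
))
```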