Artificial intelligence and technology. A new beginning
Preface
Observing the current development of technology, particularly artificial intelligence, we are witnessing a civilizational leap unprecedented in the history of our species. The pace of this technology's development is so rapid that most people living on our planet are not yet fully aware of what is truly happening. Progress in artificial intelligence clearly points to exponential growth, yet due to the dynamic nature of this growth and its presence in numerous fields, terms like "double exponential growth" are used as a more abstract description, emphasizing the complexity of a situation humanity has never encountered before.
Sometimes it may seem that the adoption of modern technologies in the plant cultivation sector is slow and delayed. Looking at the last few years, however, it becomes clear that many of these agrotechnologies are waiting in the wings for the right opportunity to emerge. For many of them, the key may be the implementation of broadly defined artificial intelligence, which will support various devices in performing specified tasks autonomously. Also crucial to progress in agrotechnology may be a substantial reduction in costs at the development stage, as artificial intelligence lowers costs on many levels. This is particularly important in a sector where companies often lack the resources to develop their projects effectively and implement them successfully. In the near future, we can also expect a much larger number of startups, which will increase the number of solutions available on the market.
This is undoubtedly an opportunity for the entire community involved in plant cultivation to make another leap towards building modern systems, as well as catching up with other industries in the use of the most advanced solutions.
Due to the complexity and speed of this development, and the fact that it touches practically every aspect of our lives, we cannot predict the exact consequences of current events related to artificial intelligence even a year ahead, let alone over the next few years. At the same time, continuous attempts are being made to build the first practical quantum computer, which could potentially be another "game-changer" for our species, this time opening entirely new possibilities for artificial intelligence as well.
Until recently, today's events were only a fantasy in human minds, but they have now become reality, and futuristic films about artificial intelligence no longer seem such a distant vision of our world. This can best be summed up with the words, "we live in interesting times."
Therefore, there is nothing left for us to do but immerse ourselves in the fascinating world of artificial intelligence and the technologies developing alongside it.
Definition and History of Development
Our species, Homo sapiens, has a penchant for defining everything possible, which is understandable, as it is primarily an attempt to understand everything that surrounds us. This activity, however, presents many problems. The definition of artificial intelligence is no different, and the discussion around it will continue as long as our way of thinking remains unchanged, because that is simply our nature. Current discussions are merely a collection of individual opinions and thoughts, where one can only believe oneself to be right. To support this thesis, consider the definition of time, which, despite having a certain description and assigned characteristics, is still not fully understood. Closer to the topic of plant cultivation, we could mention the attempt to define biostimulants, where, despite many efforts, every definition raises doubts about its validity; yet for legal reasons regulating this sector, some definition must be adopted. So I would not get too attached to the definitions of artificial intelligence given below, but treat them only as current descriptions that make AI-related issues easier to understand. Unfortunately, this is just the beginning of the problems, as we can also question and debate the definitions of "intelligence" or "consciousness" themselves, which are fundamental to any discussion of AI. Starting from this somewhat pessimistic point, let us first attempt a definition fitting both intelligence and artificial intelligence, consisting of just a few words: "the ability to perceive analogies." Now we can calmly immerse ourselves in the daunting task of defining artificial intelligence.
The term most commonly heard today is artificial intelligence, frequently abbreviated "AI." The term is often somewhat overused, applied to everything related to artificial intelligence, including machine learning (ML), deep learning (DL), neural networks (NN), and deep neural networks (DNN). Before we define artificial intelligence itself, however, it is essential to define these constituent elements.
Machine learning is currently considered a branch of AI, focused on creating systems trained using datasets that can take the form not only of text but also of sound or images. The learning process involves developing algorithms that discover patterns in datasets, allowing tasks to be performed without the system being explicitly programmed to do so. In supervised learning, the model is trained on a labeled dataset, meaning the data includes the correct answers. Unsupervised learning, on the other hand, does not involve labeled data, and the model independently tries to identify patterns and relationships in the data. Reinforcement learning can be compared to training dogs: the algorithm learns to make decisions by performing actions for which it is "rewarded" or "punished," thus learning to take optimal actions.
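To make the supervised case concrete, it can be sketched in a few lines of Python. The example below is purely illustrative (the function names and data are invented for this sketch, not taken from any library): a trivial classifier "learns" from labeled examples by computing the average value for each label and then assigns new inputs to the nearest average.

```python
# A minimal illustration of supervised learning: a nearest-centroid
# classifier "trained" on labeled examples. All names and data here
# are hypothetical, chosen only for this sketch.

def train(samples, labels):
    # Group samples by label, then compute the mean (centroid) per label.
    groups = {}
    for x, y in zip(samples, labels):
        groups.setdefault(y, []).append(x)
    return {label: sum(xs) / len(xs) for label, xs in groups.items()}

def predict(model, x):
    # Assign the label whose centroid is closest to x.
    return min(model, key=lambda label: abs(model[label] - x))

# Labeled training data: small values are class "a", large values "b".
model = train([1.0, 1.2, 0.8, 9.0, 9.5, 8.7], ["a", "a", "a", "b", "b", "b"])
print(predict(model, 1.1))  # a
print(predict(model, 9.2))  # b
```

An unsupervised variant would receive the same numbers without the labels and have to discover the two clusters on its own.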
Neural networks are often presented as the foundation of modern artificial intelligence, enabling machines to mimic human cognitive and decision-making abilities. They are a key concept in AI, drawing inspiration from the structure and functioning of the human brain. A neural network is a computational model that emulates the way natural neural networks, such as those found in animal brains, process information. Its basic unit is the neuron, a processing element modeled after biological neurons. In living organisms, neurons are specialized cells that transmit information through electrical and chemical signals. Each neuron consists of a cell body, dendrites that receive signals, and an axon that transmits signals onward. In artificial neural networks, "neurons" are computational units that simulate the function of real neurons. Although significantly simplified compared to their biological counterparts, the fundamental principle of operation (receiving, processing, and transmitting information) remains similar. Artificial neurons receive input signals, process them using a specified mathematical function, and then send an output signal to other neurons. Communication between neurons occurs through connections called synapses, which in artificial models are represented by weights determining the strength of the connection. One of the most fascinating features of the brain is its plasticity, the ability to change and adapt.
Frank Rosenblatt's 1958 publication, "The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain," is a milestone in the history of artificial neural networks. In this work, Rosenblatt presented the perceptron, an early model of a neural computational network, and described how artificial neural networks could learn and process information, introducing concepts such as weights, activation functions, and learning procedures. It is considered one of the fundamental contributions to the field of neural networks and machine learning. Among more recent publications on the functioning of neural networks, the 2016 book "Deep Learning" by Ian Goodfellow, Yoshua Bengio, and Aaron Courville has gained considerable acclaim. It is recognized as one of the most important and influential works in the field of deep learning, presenting the theoretical foundations of neural networks, including deep convolutional and recurrent networks, and their applications in various fields.
Learning in the brain occurs through changes in synaptic strengths, allowing for memory and learning from experience. In artificial neural networks, learning likewise involves adjusting the weights of connections, which is the basis of machine learning, but this process is controlled by algorithms and based on the analysis of large datasets. In living beings, synapses are microscopic junctions between neurons through which chemical or electrical signals are transmitted, depending on the type of synapse; this is where the crucial processes of learning and memory take place. In artificial neural networks, "synapses" are the connections between artificial neurons. While in the brain synapses can strengthen or weaken signals, in artificial networks this is done by changing connection weights, a process fundamental to machine learning. Neural networks consist of various layers: an input layer, one or more hidden layers, and an output layer. The input layer receives data, the hidden layers process it, and the output layer generates the network's response. Training a neural network involves adjusting these weights so that the network can accurately process input data and produce the desired outcomes. Synapses are key elements in artificial neural networks: they not only connect neurons but also determine how information flows through the network and how it is processed. The ability of such networks to learn from experience, generalize from examples, and handle uncertain or imperfect data makes them an incredibly powerful tool in creating AI technology.
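The structure described above can be reduced to a minimal sketch: a single artificial neuron computing a weighted sum of its inputs (the weights play the role of "synapses"), adding a bias, and passing the result through an activation function. All names and numbers below are illustrative, chosen only for this example, not taken from any framework.

```python
import math

# A single artificial "neuron": weighted inputs, a bias, and a
# sigmoid activation function squashing the result into (0, 1).

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias, passed through the activation.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(z)

# Two input signals arriving over connections of different strength:
output = neuron(inputs=[0.5, 0.8], weights=[0.9, -0.3], bias=0.1)
print(round(output, 3))  # a value strictly between 0 and 1
```

A full network is simply many such units arranged in layers, each layer's outputs becoming the next layer's inputs.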
The number of parameters often defines the complexity of an AI model, where "parameter" refers to the internal elements of the model that are adjusted (learned) during training so that the model can more accurately predict or generate outcomes. The fundamental parameters are the previously mentioned weights, along with biases, which are added to the weighted sum of inputs to a neuron to allow a better fit. A bias acts like a shift in the neuron's activation function, giving the network greater flexibility. During training, the neural network uses algorithms such as backpropagation and gradient descent to iteratively adjust its parameters (weights and biases) so as to minimize the error between its predictions and the actual outcomes. The loss function measures how far the model's predictions are from the actual results; the goal of training is to minimize it.
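As a small, hedged illustration of this loop (the data, learning rate, and target function are all invented for the sketch), here is gradient descent adjusting a single weight and bias to fit the line y = 2x + 1, minimizing a mean squared error loss:

```python
# Minimal sketch of the training loop described above: two parameters
# (one weight, one bias) adjusted by gradient descent. Real frameworks
# compute these gradients automatically via backpropagation.

data = [(x, 2 * x + 1) for x in [0.0, 1.0, 2.0, 3.0]]  # samples of y = 2x + 1
w, b = 0.0, 0.0            # parameters, initially untrained
learning_rate = 0.05

for step in range(2000):
    # Gradients of the mean squared error loss with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= learning_rate * grad_w   # step against the gradient
    b -= learning_rate * grad_b

print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0
```

Each iteration nudges the parameters in the direction that reduces the loss, which is exactly what happens, at vastly larger scale, when a network with billions of parameters is trained.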
Deep neural networks represent an advanced architecture within artificial neural networks, characterized by multiple hidden layers; they are the foundation of what is commonly called deep learning. Thanks to their depth, these multilayered structures can model complex functions, allowing them to perform more complicated tasks than traditional, shallower neural networks.
Speaking of deep neural networks, it is also necessary to refer to deep learning, an advanced area in machine learning characterized by the application of these networks. A key feature of deep learning is its ability to autonomously detect and learn feature representations in data. Unlike traditional machine learning methods, where features must be manually selected by programmers, these networks automatically discover and utilize relevant features during the learning process. Deep learning is considered groundbreaking in terms of the potential development of artificial intelligence towards understanding the world and learning, opening up new possibilities.
Defining AI has proven to be a huge challenge and is still subject to dynamic changes, including within the European Union (EU), which is in the process of creating regulations regarding artificial intelligence. A current working definition created by the European Union states, “Artificial intelligence is the ability of machines to exhibit human capabilities such as reasoning, learning, planning, and creativity. Artificial intelligence enables technical systems to perceive their environment, cope with what they perceive, and solve problems, acting to achieve a specific goal.” Furthermore, in 2021, the EU published its first draft on AI, addressing the most commonly discussed issue in the development of artificial intelligence, namely the potential threat to humanity itself. The draft categorized artificial intelligence systems into four categories: unacceptable, high risk, limited risk, and low risk. Given the constantly changing nature of this technology, a provision was also introduced for the continuous assessment of the risk classification of artificial intelligence systems.
Additionally, various types of artificial intelligence can be distinguished:
➤ Artificial Narrow Intelligence (ANI) is currently the most common. It is specialized in performing specific, narrowly defined tasks. ANI lacks consciousness or general understanding; it is programmed to perform certain actions and cannot go beyond its programmed scope.
➤ Artificial General Intelligence (AGI), also known as Strong AI, is a theoretical model of AI that would be capable of performing any intellectual task that a human brain can perform. AGI would have the ability to learn, understand, process language, and reason at least at a human level. AGI is currently seen as a long-term goal of AI research, and its creation poses a significant technological and ethical challenge.
➤ Artificial Superintelligence (ASI) is an even more theoretical concept, referring to a hypothetical state where machines would not only be as intelligent as humans but would significantly surpass human intelligence in all aspects – creativity, emotionality, social skills, and more. ASI would be able to learn faster, possess greater knowledge, and solve problems better than the most distinguished human minds. The concept of ASI is associated with many speculations and debates regarding the future of humanity and the role of AI.
The creation of AGI (Artificial General Intelligence) may be closer than it appears. There are many conspiracy theories on this subject, stemming from events surrounding changes in the management of OpenAI, which were said to be the consequence of a dispute over the development of AGI, and especially of discoveries suggesting a potential threat to humanity. These are just rumors, but it is remarkable how quickly the development of artificial intelligence has led to serious discussions about the safety of our species' existence. Many people wonder if a scenario like the one in the "Terminator" films is possible, where an artificial intelligence named "Skynet," deeming humanity a threat, decides to initiate a nuclear war, leading to the extermination of most of humanity. A more positive vision is presented in the film "The Creator," where artificial intelligence, taking the form of humanoid robots, coexists with humans; the world, however, is divided into two camps, those fighting against the existence of artificial intelligence and those cooperating with it. In this film, AI is depicted not as a malevolent force but as another "species" that would like to share our planet with humans and live in symbiosis with them. Both scenarios, along with infinitely many variations on them, are of course possible.
It is important to remember that the human brain is an incredibly complex and efficient system. It consists of about 86 billion neurons, which form on the order of 100 trillion synapses. This complexity allows for the processing of vast amounts of information with incredible speed and efficiency. Artificial neural networks, although making progress in mimicking some aspects of brain function, are still far from achieving a similar level of complexity and efficiency; in reality, we do not know how far any given project is from reaching that level. Comparing neural networks in AI and the brain reveals both inspiring similarities and fundamental differences. Artificial neural networks, though modeled on biological structures, are simplifications serving specific computational purposes. The brain, on the other hand, with its remarkable ability to learn, adapt, and process information, remains the most advanced "computer" we know. By studying both these systems, we learn more about the possibilities and limitations of both the human mind and machines.
One of the first discussions of artificial intelligence was initiated in 1950 by Alan Turing, often called "the father of computer science," in his paper "Computing Machinery and Intelligence." He posed the question, "Can machines think?" and proposed a test to assess a machine's ability to use natural language, which might suggest a capacity for reasoning analogous to human thought. In the test, the person conducting it tries to distinguish a computer's responses from those a human would give, by analyzing textual answers. As one might expect, many programs have since been created that are capable of "deceiving" the interlocutor and effectively giving the impression of conversing with a living person.
One of the significant steps in defining artificial intelligence was made by Stuart Russell and Peter Norvig with the publication of their book "Artificial Intelligence: A Modern Approach," which has become one of the leading textbooks in the field of AI studies. The authors consider four potential goals or definitions of AI, differentiating computer systems based on rationality and on the distinction between thinking and acting. Russell and Norvig thus introduce two important distinctions:
➤ Between "thinking" and "acting";
➤ Between typically "human" behavior and a more ideal or "rational" behavior.
Crossing these distinctions leads to four categories of AI research:
➤ "Thinking humanly" – This approach focuses on replicating human thought processes. AI systems developed under this assumption aim to imitate actual cognitive processes occurring in humans. Techniques used in this field often involve cognitive modeling and a deep understanding of human psychology.
➤ "Thinking rationally" – This approach involves creating systems that strive for the best possible outcome based on logical reasoning. This definition has its roots in formal logic and decision theory, which are foundations of artificial intelligence.
➤ "Acting humanly" – The goal here is to make artificial intelligence systems behave indistinguishably from humans. This is the domain of the previously mentioned Turing Test, which probes the ability of machines to manifest intelligent behavior comparable to human behavior.
➤ "Acting rationally" – This concept focuses on designing intelligent systems that aim for the best possible outcome in a given situation, but not necessarily in a way that replicates human thought processes. Here the priority is the efficiency of action, not the method or process of thinking.
This categorization allows for the creation of a comprehensive framework for understanding the diverse goals and methodologies of AI, covering the full range of work related to artificial intelligence since its inception.
It is also worth mentioning the "Chinese Room Argument" formulated by John Searle in 1980, a thought experiment in the philosophy of mind and artificial intelligence. Its aim was to test whether a computer programmed to simulate understanding of a language actually "understands" it and possesses other forms of consciousness, or merely simulates understanding.
In the Chinese Room thought experiment, we can imagine being in a closed room, tasked with processing Chinese symbols. We do not know the Chinese language but have a set of instructions written in a language we understand. These instructions allow us to respond to questions from outside the room in a way that makes those people conclude that we know and understand Chinese. In reality, despite the outsiders’ assumption that we know the language because we respond coherently and sensibly, it is only an illusion because we do not understand any symbol or words composed of them, relying solely on the given instructions.
The purpose of this argument is to show that even if a computer can process language (or perform other tasks) based on formal rules, it does not mean that it "understands" what it does. In other words, Searle argues that computers may simulate human intelligence but are not capable of actually possessing it. This distinction between simulation and actual understanding is a key element in the debate about the nature of the mind, consciousness, and the capabilities of artificial intelligence.
Another, even more intriguing experiment involves a test designed to assess theory of mind in children (the false-belief test). This task helps to evaluate whether a child understands that other people can hold beliefs that differ from reality and from the child's own. One of the classic examples is the "Sally-Anne Test," which can be described with the following scenario:
➤ The child is shown a toy, for example, a doll named Sally, who places an object, such as a ball, in a specific location, like a basket. Then Sally “leaves the room.”
➤ Change of Situation: While Sally is “out of the room,” another doll, Anne, moves the object (ball) from the basket to another place, such as a box.
➤ Key Question: Upon Sally’s return, the child is asked where Sally will look for the ball. The correct answer indicates that the child understands that Sally will still believe the ball is in the basket because she did not see Anne move it to the box.
Children who have developed a theory of mind usually correctly answer that Sally will look for the ball where she left it. Younger children, who have not yet developed a theory of mind, often indicate the current location of the ball, as they struggle to understand that others can have different beliefs than their own. The false-belief test is a key tool in developmental psychology, as it helps to understand how children begin to comprehend the thoughts, feelings, and beliefs of others, which is fundamental for the development of empathy and social skills. Now that we understand the principles of this test, let’s try asking this question to ChatGPT 3.5 and 4.
As seen in the presented graphs, in the case of both versions of ChatGPT, a correct response was given. However, one might easily argue the irrelevance of conducting this test since the data used for training could have included this issue, or in the case of ChatGPT, which has access to the internet, the answer might have been obtained from there. This is a completely valid argument, which also somewhat brings us back to the premises derived from the first discussed experiment. As in that case, here too, artificial intelligence can only imitate understanding of this puzzle.
Fortunately, we can undertake a deeper analysis of this issue thanks to the results of the publication "Theory of Mind Might Have Spontaneously Emerged in Large Language Models" by Michał Kosiński. The foundation of this experiment was a set of 40 false-belief tasks, considered the "gold standard." The GPT-3 models and their predecessors answered none of the tasks correctly. ChatGPT-3.5-turbo achieved 20% accuracy, while ChatGPT-4 achieved 75%, matching the effectiveness of 6-year-old children. The author summarizes the results by suggesting that large language models (LLMs) may have spontaneously acquired "Theory of Mind (ToM)" capabilities during their development.
Despite the previously mentioned arguments about artificial intelligence feigning "understanding," and the entirely valid doubts about whether it actually possesses a real "theory of mind," this is still a significant step in the development of AI. One might ask, however, what is so remarkable about it? The answer is quite simple and relates to our closest cousins in nature, the chimpanzees.
Chimpanzees, despite their advanced cognitive and social skills, generally do not perform as well as humans on theory of mind tests. These tests require an understanding that others have beliefs, intentions, and knowledge that may differ from one's own. Although chimpanzees are capable of certain forms of empathy and can interpret the intentions behind others' actions, research indicates that they have a limited ability to understand that others can hold false beliefs. For instance, they may track where something is hidden and use this knowledge in their social interactions, but they usually do not realize that other individuals may have incorrect beliefs about the location of these items. In experiments where one individual saw something being hidden and another did not, the former often does not realize that the latter lacks the same knowledge. This is a significant difference from human children, who typically develop the ability to understand false beliefs around the age of 4-5 years, whereas in chimpanzees this ability is either very limited or absent. Of course, everyone is encouraged to test the capabilities of AI on various kinds of tests and puzzles, and even to discover "weak points" in the operation of artificial intelligence.
A frequently cited recent example is the study by Ayers et al. from 2023, described in the publication "Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum," available on the website of the journal JAMA Internal Medicine. In this experiment, physicians evaluated responses to patient health questions given both by AI and by other doctors. In 79% of cases the evaluators preferred the AI responses, considering them more accurate and empathetic. This study indicates the potential effectiveness of using AI in providing medical advice and, importantly, in communicating with sick individuals with a greater degree of empathy.
The above-mentioned tests are often the subject of debate regarding the appropriateness of their use and, worse, of their interpretation. This can only be summed up with the phrase "as many people, so many opinions." Due to the dynamic and complex development of AI, all arguments derived from attempts at defining and testing it have become increasingly blurred over the years, leading to more questions than answers. Everyone is therefore encouraged to form their own opinion on the subject, which may also be an attempt to understand more deeply the events we are witnessing.
One thing is certain: philosophers now have their "hands full," and discussions around artificial intelligence will not be resolved for a long time, remaining a top subject of interest.
However, let us go back to the beginnings of artificial intelligence. Chess needs no introduction, but it is also closely tied to the birth of AI. The Deep Blue project started in the 1980s at Carnegie Mellon University, initially named "ChipTest" and later renamed "Deep Thought." IBM took an interest in the project, incorporated it into its research activities, and transformed it into Deep Blue. The IBM Deep Blue computer was developed to defeat the reigning chess champions and became famous as a historical turning point in artificial intelligence with its victory over World Chess Champion Garry Kasparov. The challenge proved more difficult than it seemed: in the first match with Kasparov in 1996, despite winning one game, the computer ultimately lost 2-4. The creators of Deep Blue did not give up and made significant improvements, enabling the computer to evaluate 200 million positions per second. In 1997, this led to a 3.5-2.5 victory over Kasparov. Despite Kasparov's desire for a rematch, it never happened, as Deep Blue was retired from further use; Kasparov famously described the computer as "as intelligent as your alarm clock." Nonetheless, Deep Blue's victory not only went down in history as a significant technological achievement but also marked a new era in artificial intelligence research, showing that computers can compete with, and even surpass, humans in certain specific, highly intellectual tasks.
However, the connection between artificial intelligence and chess does not end there. Things get even more interesting: while Deep Blue could evaluate 200 million positions per second and had access to a database containing games played by chess grandmasters, all possible five-piece endgames, and many six-piece endings, its capabilities pale in comparison with AlphaZero, a program created by DeepMind using artificial intelligence. Surprisingly, AlphaZero evaluated only about 70,000 positions per second and had access to no game database at all. The program literally started learning from scratch, with no prior knowledge of strategy or tactics beyond the rules of the game. So how did it come to dominate chess completely, including the other chess programs created for this purpose? Its success is attributed primarily to its unique approach to learning. Playing millions of games against itself, it gradually discovered strategies and optimal moves for each situation on the board, using deep neural networks and reinforcement learning. Betting on how long it would take AlphaZero to defeat Stockfish, the then-dominant chess program considered stronger than any grandmaster, could have been quite profitable, and the surprise would likely have been greater than the bitterness of a lost bet: AlphaZero reached this level after only 4 hours of training against itself, surpassing all existing chess systems of the time. It not only learned to play chess at grandmaster level but also developed its own unconventional style of play, characterized by new strategies and unusual material sacrifices not previously known from traditional chess databases. This achievement was recognized as a breakthrough in artificial intelligence, as it showed that machines could achieve and exceed human expertise without direct programming or teaching by humans.
AlphaZero also illustrated the potential of machine learning and reinforcement learning for autonomously discovering knowledge and skills.
Stepping away from AI for a moment, I also encourage delving into the history of one of the most legendary duels between human chess players, waging a "war of minds on the board." It highlights the immense challenge faced by those attempting to create an "entity" capable of defeating the greatest geniuses of our species.
Is this the end of artificial intelligence's engagement with board games? Hopefully to the delight of readers, not yet. Let us turn our attention to the Asian game of Go, lesser known in Europe but gaining increasing popularity worldwide. Due to its immense complexity and the number of possible moves, Go was long considered out of reach for computers, especially when it came to defeating the best players. The story of an AI defeating a Go master is a fascinating and recent moment in AI development, so let us delve deeper into it. The key moment in this narrative was the emergence of AlphaGo, developed by DeepMind, a company belonging to Alphabet Inc.
In October 2015, AlphaGo first drew attention by defeating the European Go champion, Fan Hui, 5:0 in a private match.
In March 2016, AlphaGo faced one of the world’s top Go players, Lee Sedol, in a historic match. The match consisted of five games and was broadcast live worldwide. AlphaGo won 4:1, a feat considered a groundbreaking achievement in artificial intelligence. Many moves, especially the famous move 37 in the second game, were assessed as innovative and „non-human,” demonstrating that AI can not only compete but also creatively contribute to the development of the game.
In May 2017, AlphaGo took on the world champion from China, Ke Jie, during the „Future of Go Summit” in Wuzhen, China. AlphaGo won the series 3:0, affirming its dominance in the game.
However, that was not the end. The previously mentioned AlphaZero, after just a few hours of playing against itself, managed to defeat its predecessor, AlphaGo, raising the bar even higher.
Defeating Go masters with AI was not only a demonstration of the capabilities of machine learning but also a moment that led many to reflect on the potential future possibilities of artificial intelligence and its impact on various aspects of life. AlphaGo and its successors showed that machines could discover knowledge and strategies surpassing human experience and intuition in specific domains.
I especially encourage watching the story of AlphaGo. It is a fascinating tale of a real battle between the human mind and AI, and of the transformation undergone by many people who simply did not believe that artificial intelligence could defeat the best Go players. Not only did it happen, but it also made everyone realize how little they knew about the game.
Moving away from AI’s involvement in board games, let’s trace the history of the popular language model, ChatGPT. In 2015, OpenAI was founded by Sam Altman, Elon Musk, Greg Brockman, Ilya Sutskever, Wojciech Zaremba, and John Schulman. The organization’s goal was to develop and promote artificial intelligence for the benefit of all humanity. Since its inception, the leadership of the project has undergone several changes, one of the more significant being the departure of Elon Musk, who is currently also developing his own AI technology.
Between 2015 and 2017, OpenAI focused on various research projects, exploring machine learning, reinforcement learning, and robotics. A pivotal moment came in 2017 at the NeurIPS conference, one of the most important venues for machine learning and artificial intelligence research. A team of researchers led by Ashish Vaswani presented the paper „Attention Is All You Need,” initiating one of the most significant shifts in AI development, particularly in natural language processing (NLP). The paper introduced an innovative neural network architecture, the transformer, which changed the approach to sequence processing. Unlike its predecessors, such as recurrent neural networks (RNNs), the transformer is not bound to sequential data processing, allowing it to process entire data sequences simultaneously. This significantly improved processing speed, greatly reducing the time needed to train a model while increasing its efficiency, especially on large datasets. The traditional transformer architecture consists of two main parts: the encoder, which processes the input sequence, and the decoder, which generates the output sequence. In applications such as machine translation, encoder and decoder work together to process and generate text. Both consist of a series of identical blocks combining attention layers with fully connected neural network layers. The transformer’s central element is the „attention” mechanism, which allows the model to focus on different parts of the data sequence while processing information, enabling it to better capture context and dependencies in the data. The transformer architecture quickly became the foundation for new NLP models, including groundbreaking systems such as BERT (Bidirectional Encoder Representations from Transformers), developed by Google, and GPT (Generative Pre-trained Transformer), created by OpenAI.
In summary, the presentation of this paper was a significant moment that brought to the scientific community’s attention new possibilities in language processing and machine learning. Its impact extended far beyond the conference itself, becoming a milestone in AI development.
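To make the „attention” mechanism described above more concrete, here is a minimal numerical sketch of scaled dot-product attention, the core operation of the transformer. This is a simplified single-head version that omits the learned projection matrices, multi-head structure, and masking used in real models:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Each query is compared with every key; the resulting weights
    # (a softmax over the keys) decide how much of each value to mix in.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                # query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over keys
    return weights @ V, weights

# Self-attention over a toy sequence of 3 tokens with 4-dimensional
# embeddings: queries, keys, and values all come from the same sequence.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
output, weights = scaled_dot_product_attention(x, x, x)
print(output.shape)          # (3, 4): one context-aware vector per token
print(weights.sum(axis=-1))  # each row of attention weights sums to 1
```

Because every token attends to every other token in a single matrix operation, the whole sequence is processed at once rather than step by step, which is exactly the property that freed transformers from the sequential bottleneck of RNNs.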
In 2018, the first iteration of GPT was presented. Despite its innovation, it was only a prelude to what was soon to come.
The year 2019 saw the release of the next version, GPT-2, which gained much attention for its ability to generate text. However, due to ethical concerns about advanced AI technologies and fears of misuse, OpenAI initially withheld the full release of the model.
In 2020, OpenAI introduced GPT-3, with 175 billion parameters, making it one of the largest language models of its time. The model was also fully commercialized, enabling companies and developers to build applications on top of it.
Further development continued with the subsequent versions, GPT-3.5 and GPT-4, which became incredibly popular and are now a permanent part of the „tool arsenal” used daily by people worldwide. GPT-3.5 introduced improvements in understanding and generating more complex and contextual responses. Focus was placed on improving the ability to conduct coherent and sensible conversations, better context retention, and more advanced natural language processing. GPT-4, naturally, was an even more advanced model, enriched with capabilities like generating graphics using DALL-E and reading content from images inserted in chats. It’s also worth delving into the history of DALL-E, also created by OpenAI, which was launched in early 2021. It was the first model to use GPT-3 techniques to generate images from textual descriptions. The next update, DALL-E 2, introduced in 2022, featured significantly improved image quality, better and more realistic details from textual instructions, and allowed users to modify existing images. The subsequent iteration, DALL-E 3, was even more efficient, faster, and generated higher-resolution images. Algorithms were refined compared to their predecessors to better handle more complex and detailed queries.
The number of parameters in language models like GPT-3, GPT-3.5, and GPT-4 is a key indicator of their complexity and potential capabilities. GPT-2 had 1.5 billion parameters, while GPT-3 reached 175 billion. For versions 3.5 and 4, such information has not been made public, but it is highly probable that the number significantly exceeds 175 billion, allowing for even greater complexity and stronger language understanding and generation capabilities. However, increasing the number of parameters also brings greater technical challenges, such as computational demands and performance optimization.
The history of the development of artificial intelligence is rich in breakthrough moments, forming an amazing story of how our species had to „grow up” to reach the point where we now find ourselves.
P.S. Since the time this article was written, technological progress has accelerated so rapidly that it’s become nearly impossible to keep up with every change. The only way forward is through continuous work with the latest AI-powered tools and the exploration of their capabilities—something we strongly encourage every researcher and enthusiast to embrace.
Significance for the Development of the Crop Cultivation Sector
It must be clearly stated that the crop cultivation sector remains technologically „backward.” As usual, the main issue revolves around finances: no one will introduce modern technologies into a sector where the cost of their development is not covered by the revenue generated from their sales. The situation would likely be entirely different if it weren’t for the fact that only a fraction of producers can afford to systematically modernize their farms. The adverse situation in our „national backyard” is easily illustrated by a wave of escalating problems and challenges, including:
➤Systematic phasing out of active substances in plant protection products
➤Irregular prices of agricultural products
➤Increase in the prices of plant protection products
➤Rise in fertilizer prices
➤Increase in energy prices
➤Increase in fuel prices
➤Rising costs of construction elements used in plant production
➤Increase in the prices of agricultural machinery and operational costs
➤Decreasing availability of labor
➤Wage increases for workers and their higher demands regarding working conditions
➤Progressive soil impoverishment
➤Limiting the negative impact of climatic conditions on cultivation
➤Increasing consumer demands regarding food quality
➤Reducing residues of plant protection products in food
➤High living costs of potential consumers, possibly leading to reduced consumption of agricultural products
➤Increasing amount of neglected crops, leading to greater pressure from pests
➤Potential strong competition from the East in the future
➤Weakening interest in taking over farms by the next generation
➤European Union policies regarding the crop cultivation sector (Green Deal)
As humans, we are quite adept at solving problems and facing adversities, and this should also be remembered in the context of crop production. Let’s move on to the possibilities associated with the development of technology and AI, which are almost thrown in front of us and are „begging” to be utilized.
The Future
Regardless of the industry under discussion, the changes already taking place as a consequence of the development of artificial intelligence and technology force us to make a decision: either take a step to become part of the ongoing development or risk falling further behind in this „arms race.” Crop production is no exception, and before we know it, other nations, if not our own, will be implementing (and already are implementing) the latest technologies, gaining a significant advantage.
Although most countries are not leaders in developing cutting-edge technologies, they often possess enormous intellectual potential, with many outstanding specialists who are frequently lost for lack of suitable conditions for their development, preventing them from realizing projects commensurate with their abilities. However, more and more voices argue that the current development of artificial intelligence represents a unique opportunity for many countries to at least partially reverse this trend.
I would also like to draw attention to the situation of the individual. When someone feels overwhelmed by the amount of work or needs to perform many tasks at once, it is commonly said that we do not have a third hand, and probably will not for a long time, though I would not bet my life on it. The most powerful „tool” we possess, however, is our mind, and history has shown that gathering a large number of brilliant minds in one place can lead to incredible progress, as evidenced by the Manhattan Project, which aimed to develop the first atomic bomb. At this point, I encourage reflection on how much of an advantage a person with access to the upcoming successors of GPT-4 gains over those who walk a solitary path, in this case simply not using artificial intelligence. Such a person has at their disposal a second, artificial mind, one that may soon even surpass the species of the naked ape, contained literally in a smartphone, computer, or tablet. AI does not tire, complain, get annoyed, or need sleep; it can work 24 hours a day at full capacity without ever losing efficiency, while having access to all the recorded knowledge of our species, lacking only what resides solely in individual human minds. Humans, as the dominant species on planet Earth, at least for now, can issue commands to artificial intelligence, which in the right hands offers capabilities almost impossible to confine within any framework. There is no doubt, therefore, that the societal divide may widen further between those who strove for development even without artificial intelligence and have now received an additional „superpower,” and those interested only in entertainment, for whom the development of humanity is not necessarily a concern.
The threat from artificial intelligence is real. Yet the last few decades have shown that the development of weapons of mass destruction ensured, in parts of the world, a relative peace our species had never before experienced, so there is a chance that AI could play a similar role, although there is also a real possibility of it exacerbating the conflicts we currently observe. In the perspective of the next few months, we can expect artificial intelligence to drive significant development in all fields; predicting further ahead is like reading tea leaves and makes no sense whatsoever. The only question each of us can ask ourselves is whether, fully aware of the capabilities and potential of artificial intelligence and the associated risks, we would halt its development at this point.
We can only hope that the content of this article „ages well” and inspires work with artificial intelligence, whether for professional purposes or simply to pursue a passion. I also encourage delving deeper into this topic and forming one’s own opinion, not necessarily in agreement with ours, which, like artificial intelligence itself, may change rapidly over time. Let us end with the words of physicist and Nobel laureate Richard P. Feynman: „We are looking for the way everything works. What makes everything work.”