Artificial Intelligence and Technology: A New Beginning
Preface
Observing the current development of technology, particularly artificial intelligence, we are witnessing a civilizational leap unprecedented in the history of our species. The pace of this technology’s development is so rapid that most people living on our planet are not yet fully aware of what is truly happening. Progress in artificial intelligence clearly follows an exponential curve, yet because of the dynamics of this growth and its presence in so many fields, terms like “double exponential growth” are sometimes used as a more abstract way to emphasize the complexity of a situation humanity has never encountered before.
Sometimes it may seem that the adoption of modern technologies in the plant cultivation sector is slow and delayed. Looking at the last few years, however, it becomes clear that many of these agrotechnologies are waiting in the shadows for the right opportunity to emerge. For many of them, the key enabler may be the implementation of broadly defined artificial intelligence, which will help various devices achieve autonomy in performing specified tasks. Equally important for progress in agrotechnology may be a significant reduction in costs at the creation stage, as artificial intelligence lowers costs on many levels. This matters especially in a sector where companies often lack the resources to develop their projects effectively and bring them to market. In the near future, we can also expect the emergence of a much larger number of startups, which will increase the number of solutions available on the market.
This is undoubtedly an opportunity for the entire community involved in plant cultivation to make another leap towards building modern systems, as well as catching up with other industries in the use of the most advanced solutions.
Due to the complexity and speed of this development, and the fact that it touches practically every aspect of our lives, we cannot predict the exact consequences of current events related to artificial intelligence even a year ahead, let alone over the next few years. At the same time, continuous attempts are being made to build a practical quantum computer, which could potentially be another “game-changer” for our species, this time opening entirely new possibilities for artificial intelligence as well.
What is happening today was, until recently, only a fantasy in human minds; it has now become reality, and futuristic films about artificial intelligence no longer seem such a distant vision of our world. This can best be summarized with the words, “we live in interesting times.”
Therefore, we are left with nothing but to immerse ourselves in the fascinating world of artificial intelligence and the technologies that are developing alongside it.
Definition and History of Development
Our species, Homo sapiens, has a penchant for defining everything possible, which is understandable, as it is primarily an attempt to comprehend everything that surrounds us. This activity, however, presents us with many problems. The definition of artificial intelligence is no different, and the discussion around it will continue as long as our way of thinking remains unchanged, because that is simply our nature. Current discussions are merely a collection of individual opinions and thoughts, in which each participant can only believe they are right. To support this thesis, consider the definition of time, which, despite having a certain description and assigned characteristics, is still not fully understood. Returning to the topic of plant cultivation, we could mention the attempts to define biostimulants: despite many efforts, every proposed definition raises doubts about its validity, yet for the legal regulation of this sector we must adopt one. So I would not get too attached to the definitions of artificial intelligence given below; treat them only as current descriptions that facilitate a better understanding of AI-related issues. Unfortunately, this is just the beginning of the problems associated with this topic, as we can also question and discuss the definitions of “intelligence” or “consciousness” themselves, which are fundamental to any discussion of AI. Starting from this somewhat pessimistic position, let us try at the outset to give a definition fitting both intelligence and artificial intelligence, consisting of just a few words: “the ability to perceive analogies.” Now we can calmly immerse ourselves in the daunting task of defining artificial intelligence.
The term most commonly heard today is artificial intelligence, usually abbreviated as “AI.” The term is often overused as a blanket label for everything related to the field, including machine learning (ML), deep learning (DL), neural networks (NN), and deep neural networks (DNN). Before we define artificial intelligence itself, it is therefore essential to define these constituent elements.
Machine learning is currently considered a branch of AI, focused on creating systems trained on datasets that can take the form of not only text but also sound or images. The learning process involves developing algorithms that discover patterns in these datasets, allowing tasks to be performed without being explicitly programmed for them. In supervised learning, the model is trained on a labeled dataset, meaning the data includes the correct answers. Unsupervised learning, on the other hand, does not involve labeled data, and the model independently tries to identify patterns and relationships in the data. Reinforcement learning can be compared to training dogs: the algorithm learns to make decisions by performing actions for which it is “rewarded” or “punished,” thus learning to take optimal actions.
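The “reward and punishment” loop of reinforcement learning can be sketched in a few lines. The following is a minimal illustration, not any particular library’s API: a hypothetical two-armed “bandit” environment in which arm 1 pays a reward more often than arm 0, and an agent that keeps a running value estimate per action and gradually shifts toward the better one.

```python
import random

random.seed(42)

values = [0.0, 0.0]        # estimated value of each action
counts = [0, 0]            # how often each action was taken
reward_prob = [0.2, 0.8]   # hypothetical environment: arm 1 is better

for step in range(1000):
    # epsilon-greedy: mostly exploit the best-looking arm, sometimes explore
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = 0 if values[0] > values[1] else 1
    # the environment "rewards" or "punishes" the chosen action
    reward = 1.0 if random.random() < reward_prob[action] else 0.0
    counts[action] += 1
    # incremental average: nudge the estimate toward the observed reward
    values[action] += (reward - values[action]) / counts[action]

print(values[1] > values[0])  # the agent has learned that arm 1 is better
```

After enough trials the value estimates approach the true reward rates, so the agent ends up taking the optimal action most of the time without ever being told which arm was better.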
Neural networks are often presented as the foundation of modern artificial intelligence, enabling machines to mimic human cognitive and decision-making abilities. They are a key concept within AI, drawing inspiration from the structure and functioning of the human brain: a computational model that emulates the way natural neural networks, such as those found in animal brains, process information. The basic unit of a neural network is the neuron, a processing element modeled after biological neurons. In living organisms, neurons are specialized cells that transmit information through electrical and chemical signals. Each neuron consists of a cell body, dendrites that receive signals, and an axon that transmits signals onward. In artificial neural networks, “neurons” are computational units that simulate the function of real neurons. Although significantly simplified compared to their biological counterparts, the fundamental principle of operation (receiving, processing, and transmitting information) remains similar: each artificial neuron receives input signals, processes them using a specified mathematical function, and then sends an output signal to other neurons. Communication between neurons occurs through connections called synapses, which in artificial models are represented by weights determining the strength of the connection. One of the most fascinating features of the brain is its plasticity, the ability to change and adapt.
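The operation of a single artificial neuron described above can be sketched directly. This is a minimal illustration under assumed example values (the inputs, weights, and bias below are arbitrary, and a sigmoid is just one common choice of activation function):

```python
import math

def neuron(inputs, weights, bias):
    """Weight each input signal, add the bias, apply an activation."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))  # sigmoid activation

# arbitrary example: three input signals, three connection weights
out = neuron([0.5, -1.0, 2.0], weights=[0.4, 0.3, 0.9], bias=-0.5)
print(round(out, 3))
```

The weights play the role of the synapses: larger weights let an input signal influence the output more strongly, and the activation function squashes the result into a bounded output signal passed on to other neurons.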
Frank Rosenblatt’s 1958 publication, “The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain,” is a milestone in the history of artificial neural networks. In this work, Rosenblatt presented the perceptron model, an early version of the neural computational network, and described how artificial neural networks could learn and process information, introducing concepts such as weights, activation functions, and learning processes. This work is considered one of the fundamental contributions to the field of neural networks and machine learning. Among more recent publications on the functioning of neural networks, the 2016 book “Deep Learning” by Ian Goodfellow, Yoshua Bengio, and Aaron Courville has gained considerable acclaim. It is recognized as one of the most important and influential works in the field of deep learning, presenting the theoretical foundations of neural networks, including deep convolutional and recurrent networks, and their applications across various fields.
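Rosenblatt’s learning rule is simple enough to sketch in full. The toy dataset below (logical OR) and the learning rate are illustrative choices, not taken from the original paper; the idea is faithful to the perceptron, though: whenever the step-activated prediction is wrong, nudge the weights and bias toward the correct answer.

```python
def step(z):
    """Perceptron activation: fire (1) if the weighted sum is non-negative."""
    return 1 if z >= 0 else 0

# toy training set: logical OR, with labels as the correct answers
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1   # weights, bias, learning rate

for _ in range(20):               # a few passes over the data suffice here
    for x, target in data:
        pred = step(w[0] * x[0] + w[1] * x[1] + b)
        error = target - pred
        # Rosenblatt's rule: adjust each weight in proportion to its input
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([step(w[0] * x[0] + w[1] * x[1] + b) for x, _ in data])  # [0, 1, 1, 1]
```

For any linearly separable problem such as OR, this procedure is guaranteed to converge, which is precisely what made the 1958 result so influential.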
Learning in the brain occurs through changes in synaptic strengths, allowing for memory and learning from experience. In artificial neural networks, learning likewise involves adjusting the weights of connections, the basis of machine learning, but this process is controlled by algorithms and based on the analysis of large datasets. In living beings, synapses are microscopic junctions between neurons through which chemical or electrical signals are transmitted, depending on the type of synapse; this is where the crucial processes of learning and memory take place. In artificial neural networks, “synapses” are the connections between artificial neurons. While in the brain synapses can strengthen or weaken signals, in artificial neural networks this is done by changing connection weights, a process fundamental to machine learning. Neural networks consist of various layers: an input layer, one or more hidden layers, and an output layer. The input layer receives data, the hidden layers process these data, and the output layer generates the network’s response. Training a neural network involves adjusting the synaptic weights so that the network can accurately process input data and produce the desired outcomes. Synapses are thus key elements of artificial neural networks: they not only connect neurons but also determine how information flows through the network and how it is processed. The resulting ability of such networks to learn from experience, generalize from examples, and handle uncertain or imperfect data makes them an incredibly powerful tool in creating AI technology.
The number of parameters often defines the complexity of AI models, where the term “parameter” refers to the internal elements of the model that are adjusted (learned) during training to enable the model to predict or generate outcomes more accurately. The fundamental parameters are the previously mentioned weights, together with biases, values added to the weighted sum of inputs to a neuron that allow for a better model fit. A bias acts like a shift point in the neuron’s activation function, giving the network greater flexibility. During training, the neural network uses algorithms such as backpropagation and gradient optimization to iteratively adjust its parameters (weights and biases) so as to minimize the error between its predictions and the actual outcomes. The loss function measures how far the model’s predictions are from the actual results; the goal of training is to minimize this loss function.
Deep neural networks, the architecture behind what is commonly called deep learning, are advanced artificial neural networks characterized by multiple hidden layers. These multilayered structures can model complex functions thanks to their depth, allowing them to perform more complicated tasks than traditional, shallower neural networks.
Speaking of deep neural networks, it is also necessary to refer to deep learning, an advanced area of machine learning characterized by the application of these networks. A key feature of deep learning is its ability to autonomously detect and learn feature representations in data. Unlike traditional machine learning methods, where features must be manually selected by programmers, deep networks automatically discover and utilize relevant features during the learning process. Deep learning is considered groundbreaking for the potential development of artificial intelligence toward understanding the world and learning, opening up entirely new possibilities.
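The layered structure itself can be sketched with a forward pass through a small, untrained network. Everything here is illustrative: random weights stand in for learned ones, the layer sizes are arbitrary, and `tanh` is just one common activation choice. What matters is the shape: each hidden layer transforms the previous layer’s outputs, so deeper layers operate on “features of features.”

```python
import math
import random

random.seed(0)

def layer(inputs, n_out):
    """One fully connected layer with hypothetical random weights."""
    return [
        math.tanh(sum(x * random.uniform(-1, 1) for x in inputs))
        for _ in range(n_out)
    ]

x = [0.2, -0.7, 1.5]      # input layer: raw features
h1 = layer(x, 4)          # first hidden layer
h2 = layer(h1, 4)         # second hidden layer: features of features
out = layer(h2, 1)        # output layer: the network's response
print(len(h1), len(h2), len(out))
```

In a trained deep network, the weights inside each layer would be adjusted by backpropagation so that the intermediate representations become useful features rather than random mixtures.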
Defining AI has proven to be a huge challenge and is still subject to dynamic changes, including within the European Union (EU), which is in the process of creating regulations regarding artificial intelligence. A current working definition created by the European Union states, “Artificial intelligence is the ability of machines to exhibit human capabilities such as reasoning, learning, planning, and creativity. Artificial intelligence enables technical systems to perceive their environment, cope with what they perceive, and solve problems, acting to achieve a specific goal.” Furthermore, in 2021, the EU published its first draft on AI, addressing the most commonly discussed issue in the development of artificial intelligence, namely the potential threat to humanity itself. The draft categorized artificial intelligence systems into four categories: unacceptable, high risk, limited risk, and low risk. Given the constantly changing nature of this technology, a provision was also introduced for the continuous assessment of the risk classification of artificial intelligence systems.
Additionally, various types of artificial intelligence can be distinguished:
➤ Artificial Narrow Intelligence (ANI) is currently the most common. It is specialized in performing specific, narrowly defined tasks. ANI lacks consciousness or general understanding; it is programmed to perform certain actions and cannot go beyond its programmed scope.
➤ Artificial General Intelligence (AGI), also known as Strong AI, is a theoretical model of AI that would be capable of performing any intellectual task that a human brain can perform. AGI would have the ability to learn, understand, process language, and reason at least at a human level. AGI is currently seen as a long-term goal of AI research, and its creation poses a significant technological and ethical challenge.
➤ Artificial Superintelligence (ASI) is an even more theoretical concept, referring to a hypothetical state where machines would not only be as intelligent as humans but would significantly surpass human intelligence in all aspects – creativity, emotionality, social skills, and more. ASI would be able to learn faster, possess greater knowledge, and solve problems better than the most distinguished human minds. The concept of ASI is associated with many speculations and debates regarding the future of humanity and the role of AI.
Currently, the creation of AGI may be closer than it appears. There are many conspiracy theories on the subject, stemming from events related to changes in the management of OpenAI, which were said to be the consequence of a dispute over the development of AGI, especially of discoveries suggesting a potential threat to humanity. These are just rumors, but it is remarkable how quickly the development of artificial intelligence has led to serious discussions about the safety of our species’ existence. Many people wonder whether a scenario like the one in the “Terminator” series of films is possible, where an artificial intelligence named “Skynet,” deeming humanity a threat, decides to initiate a nuclear war, leading to the extermination of most of humanity. A more positive vision is presented in the film “The Creator,” where artificial intelligence, taking the form of humanoid robots, coexists with humans, although the world is divided into two camps: those fighting against the existence of artificial intelligence and those cooperating with it. In this film, AI is depicted not as a malevolent force but as another “species” that would like to share our planet with humans and live in symbiosis with them. Both scenarios, and infinitely many variations on them, are of course possible.
It is important to remember that the human brain is an incredibly complex and efficient system. It consists of about 86 billion neurons, connected by on the order of one hundred trillion synapses. This complexity allows vast amounts of information to be processed with incredible speed and efficiency. Artificial neural networks, although making progress in mimicking some aspects of brain function, are still far from a similar level of complexity and efficiency; in reality, we do not know how much any given project lacks to reach this level. Comparing neural networks in AI and the brain reveals both inspiring similarities and fundamental differences. Artificial neural networks, though modeled on biological structures, are simplifications that serve specific computational purposes. The brain, on the other hand, with its remarkable ability to learn, adapt, and process information, remains the most advanced “computer” we know. By studying both of these systems, we learn more about the possibilities and limitations of both the human mind and machines.
One of the first discussions of artificial intelligence was initiated in 1950 by Alan Turing, who also earned the nickname “the father of computer science,” in his paper “Computing Machinery and Intelligence.” He posed the question, “Can machines think?” and proposed a test to assess a machine’s ability to use natural language, which might suggest a capacity for reasoning analogous to human thought. The test involves a situation in which the person conducting it tries to differentiate the responses given by a computer from those that would be given by a human, by analyzing textual responses. As one might expect, many programs have since been created that are capable of “deceiving” the interlocutor and effectively giving the impression of conversing with a living person.
One of the significant steps in defining artificial intelligence was made by Stuart Russell and Peter Norvig with the publication of their book “Artificial Intelligence: A Modern Approach,” which has become one of the leading textbooks in the field of AI studies. The authors delve into four potential goals or definitions of AI, differentiating computer systems based on rationality and the distinction between thinking and acting. To that end, Russell and Norvig introduce two important distinctions:
➤ Between “thinking” and “acting”;
➤ Between typically “human” behavior and a more ideal or “rational” level of behavior.
Crossing these distinctions leads to four categories of AI research:
➤ “Thinking humanly” – This approach primarily focuses on replicating human thought processes. AI systems developed under this assumption aim to imitate the actual cognitive processes occurring in humans. Techniques used in this field often involve cognitive modeling and a deep understanding of human psychology.
➤ “Thinking rationally” – This approach involves creating systems that strive for the best possible outcome based on logical reasoning. This definition has its roots in formal logic and decision theory, which are foundations of artificial intelligence.
➤ “Acting humanly” – The goal here is to make artificial intelligence systems behave indistinguishably from humans. This is the domain of the previously mentioned Turing Test, which examines the ability of machines to manifest intelligent behavior comparable to human behavior.
➤ “Acting rationally” – This concept focuses on designing intelligent systems that aim for the best possible outcome in a given situation, but not necessarily in a way that replicates human thought processes. In this context, the priority is the efficiency of action, not the method or process of thinking.
This categorization allows for the creation of a comprehensive framework for understanding the diverse goals and methodologies of AI, covering the full range of work related to artificial intelligence since its inception.
It is also worth mentioning the “Chinese Room Argument” formulated by John Searle in 1980, a thought experiment in the philosophy of mind and artificial intelligence. Its aim was to test whether a computer programmed to simulate understanding a language actually “understands” it, and possesses other forms of consciousness, or merely simulates understanding.
In the Chinese Room thought experiment, we can imagine being in a closed room, tasked with processing Chinese symbols. We do not know the Chinese language but have a set of instructions written in a language we understand. These instructions allow us to respond to questions from outside the room in a way that makes those people conclude that we know and understand Chinese. In reality, despite the outsiders’ assumption that we know the language because we respond coherently and sensibly, it is only an illusion, because we do not understand any of the symbols or the words composed of them, relying solely on the given instructions.
The purpose of this argument is to show that even if a computer can process language (or perform other tasks) based on formal rules, this does not mean that it “understands” what it does. In other words, Searle argues that computers may simulate human intelligence but are not capable of actually possessing it. This distinction between simulation and actual understanding is a key element in the debate about the nature of the mind, consciousness, and the capabilities of artificial intelligence.
Another experiment, even more intriguing, involves conducting a test designed to assess theory of mind in children (the false-belief test). This is a task that helps to evaluate whether a child understands that other people can have beliefs that are different from reality and from the child’s own beliefs. One of the classic examples of such a test is the “Sally-Anne Test,” which can be described with the following scenario:
➤ The child is shown a toy, for example, a doll named Sally, who places an object, such as a ball, in a specific location, like a basket. Then Sally “leaves the room.”
➤ Change of Situation: While Sally is “out of the room,” another doll, Anne, moves the object (ball) from the basket to another place, such as a box.
➤ Key Question: Upon Sally’s return, the child is asked where Sally will look for the ball. The correct answer indicates that the child understands that Sally will still believe the ball is in the basket because she did not see Anne move it to the box.
Children who have developed a theory of mind usually correctly answer that Sally will look for the ball where she left it. Younger children, who have not yet developed a theory of mind, often indicate the current location of the ball, as they struggle to understand that others can have different beliefs than their own. The false-belief test is a key tool in developmental psychology, as it helps to understand how children begin to comprehend the thoughts, feelings, and beliefs of others, which is fundamental for the development of empathy and social skills. Now that we understand the principles of this test, let’s try asking this question to ChatGPT 3.5 and 4.
As seen in the presented graphs, both versions of ChatGPT gave a correct response. One might easily argue, however, that conducting this test is beside the point, since the data used for training could have included this very problem, or, in the case of a ChatGPT version with access to the internet, the answer might simply have been retrieved from there. This is a completely valid argument, and one that brings us back to the conclusions of the Chinese Room experiment discussed earlier: here too, artificial intelligence may only be imitating an understanding of the puzzle.
Fortunately, we can analyze this issue more deeply thanks to the results Michał Kosiński obtained in the publication “Theory of Mind Might Have Spontaneously Emerged in Large Language Models.” The foundation of this experiment was a set of 40 false-belief tasks, considered the “gold standard.” The GPT-3 models and their predecessors answered none of the tasks correctly. ChatGPT-3.5-turbo reached 20% accuracy, while ChatGPT-4 achieved 75%, matching the effectiveness of 6-year-old children. The author summarizes the results by suggesting that large language models (LLMs) may have spontaneously acquired “Theory of Mind (ToM)” capabilities during their development.
Despite the previously mentioned arguments about artificial intelligence merely feigning “understanding,” and the entirely valid doubts about whether it actually possesses a real “theory of mind,” this is still a significant step in the development of AI. One might ask, however, what is so remarkable about it? The answer is quite simple and relates to our close cousins in nature, the chimpanzees.
Chimpanzees, despite their advanced cognitive and social skills, generally do not perform as well as humans on theory of mind tests. These tests require an understanding that others have beliefs, intentions, and knowledge that may differ from one’s own. Although chimpanzees are capable of certain forms of empathy and can read intentions in the actions of others, research indicates that they have a limited ability to understand that others can hold false beliefs. For instance, they may track where something is hidden and use this knowledge in their social interactions, but they usually do not realize that other individuals may hold incorrect beliefs about the location of these items. In experiments where one individual saw something being hidden and another did not, the former often does not grasp that the latter lacks the same knowledge. This is a significant difference from human children, who typically develop the ability to understand false beliefs around the age of 4-5 years, whereas in chimpanzees this ability is either very limited or absent. Of course, everyone is encouraged to test the capabilities of AI on various kinds of tests and puzzles, and even to discover “weak points” in the operation of artificial intelligence.
A frequently cited example of late is the 2023 study by Ayers et al., described in the publication “Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum,” available on the JAMA Internal Medicine website. In this experiment, a panel of healthcare professionals evaluated responses to patients’ health questions given both by an AI chatbot and by physicians. In 79% of evaluations, the panel preferred the AI responses, considering them more accurate and empathetic. This study indicates the potential effectiveness of using AI to provide medical advice and, importantly, to communicate with sick individuals while showing a greater degree of empathy.
The above-mentioned tests are often the subject of debate regarding the appropriateness of their use, and worse, their interpretation is also widely disputed. This can only be summed up with the phrase “so many people, so many opinions.” Due to the dynamic and complex development of AI, all the arguments derived from attempts at defining and testing it have become increasingly blurred over the years, leading to more questions than answers. Everyone is therefore encouraged to form their own opinion on the subject, which may also be a way of understanding more deeply the events we are witnessing.
One thing is certain: philosophers now have their “hands full,” and discussions related to artificial intelligence will not be resolved for a long time, remaining a top subject of interest.
However, let us go back to the beginnings of artificial intelligence. Chess needs no introduction, but it is also closely tied to the birth of AI. The Deep Blue project started in the 1980s at Carnegie Mellon University, initially named “ChipTest” and later renamed “Deep Thought.” IBM took an interest in the project and incorporated it into its research activities, transforming it into Deep Blue. The IBM Deep Blue computer was developed to defeat the reigning chess champions, and its victory over World Chess Champion Garry Kasparov went down in history as a turning point in the field of artificial intelligence. The challenge, however, proved more difficult than it seemed: in the first match with Kasparov in 1996, despite winning one game, the computer ultimately lost 2-4. The creators of Deep Blue did not give up easily and made significant improvements, enabling the computer to evaluate 200 million positions per second. In 1997, this led to a 3.5-2.5 victory over Kasparov. Despite Kasparov’s desire for a rematch, it never happened, as Deep Blue was retired from further use, and Kasparov famously dismissed the computer’s capability as “as intelligent as your alarm clock.” Nonetheless, Deep Blue’s victory not only went down in history as a significant technological achievement but also marked a new era in artificial intelligence research, showing that computers can compete with, and even surpass, humans in certain specific, highly intellectual tasks.
The connection between artificial intelligence and chess does not end there, and things now get even more interesting. While Deep Blue could evaluate 200 million positions per second and had access to a database containing games played by chess grandmasters, all possible five-piece endgames, and many six-piece endings, its capabilities are nothing compared to AlphaZero, the program created by DeepMind using artificial intelligence. Surprisingly, AlphaZero evaluated only about 70,000 positions per second and had no access to any game database. The program literally started learning from scratch, without any prior knowledge of strategy or tactics beyond the rules of the game. So how did it come to dominate chess completely, along with the chess programs built for that purpose? Its success is attributed primarily to its unique approach to learning the game: playing millions of games per minute against itself, it gradually discovered strategy and the optimal moves for each situation on the board, using deep neural networks and reinforcement learning. Betting on how long it would take AlphaZero to defeat Stockfish, the then-dominant chess program considered stronger than any grandmaster, could have been quite profitable, and the surprise would likely have been greater than the bitterness of a lost bet: AlphaZero reached that level after only 4 hours of training against itself, surpassing all chess systems existing at the time. AlphaZero not only learned to play chess at a grandmaster level but also developed its own unconventional style of play, characterized by new strategies and unusual material sacrifices not previously known from traditional chess databases. This achievement was recognized as a breakthrough in the field of artificial intelligence, as it showed that machines can reach and exceed human expertise without direct programming or teaching by humans.
AlphaZero also illustrated the potential of machine learning and reinforcement learning for autonomously discovering knowledge and skills.
Stepping away from AI for a moment, I also encourage delving into the history of one of the most legendary duels between human chess players, a true “war of minds on the board.” It highlights the immense challenge faced by creators attempting to build an “entity” capable of defeating the greatest geniuses of our species.
Is this the end of artificial intelligence’s engagement with board games? Hopefully to the delight of readers, not yet. Let’s turn our attention to the Asian game of Go, which is lesser-known in Europe but is gaining increasing popularity worldwide. Primarily, the game of Go, due to its immense complexity and the number of possible moves, was long considered unattainable for computers, especially in terms of defeating the best players. The story of an AI defeating a Go master is a fascinating moment in AI development, occurring in recent years. Let’s delve deeper into this story. A key moment in this narrative was the emergence of AlphaGo, developed by DeepMind, a company belonging to Alphabet Inc.
In October 2015, AlphaGo first drew attention by defeating the European Go champion, Fan Hui, 5:0 in a private match.
In March 2016, AlphaGo faced one of the world’s top Go players, Lee Sedol, in a historic match. The match consisted of five games and was broadcast live worldwide. AlphaGo won 4:1, a feat considered a groundbreaking achievement in artificial intelligence. Many moves, especially the famous move 37 in the second game, were assessed as innovative and „non-human,” demonstrating that AI can not only compete but also creatively contribute to the development of the game.
In May 2017, AlphaGo took on the world champion from China, Ke Jie, during the „Future of Go Summit” in Wuzhen, China. AlphaGo won the series 3:0, affirming its dominance in the game.
However, that was not the end. The previously mentioned AlphaZero, after just a few hours of playing against itself, also managed to defeat its predecessor, AlphaGo, raising the bar even higher.
Defeating Go masters with AI was not only a demonstration of the capabilities of machine learning but also a moment that led many to reflect on the potential future possibilities of artificial intelligence and its impact on various aspects of life. AlphaGo and its successors showed that machines could discover knowledge and strategies surpassing human experience and intuition in specific domains.
I especially encourage watching the story of AlphaGo. It is a fascinating tale that showcases a real battle of the human mind against AI, and the transformation undergone by many people who simply did not believe that artificial intelligence could defeat the best Go players. Not only did it happen, but it also made everyone realize how little they knew about the game.
Moving away from AI’s involvement in board games, let’s trace the history of the popular language model, ChatGPT. In 2015, OpenAI was founded by Sam Altman, Elon Musk, Greg Brockman, Ilya Sutskever, Wojciech Zaremba, and John Schulman. The organization’s goal was to develop and promote artificial intelligence for the benefit of all humanity. Since its inception, the leadership of the project has undergone several changes, one of the more significant being the departure of Elon Musk, who is currently also developing his own AI technology.
Between 2015 and 2017, OpenAI initially focused on various research projects, exploring aspects of machine learning, reinforcement learning, and robotics. The pivotal year was 2017, when, at NeurIPS, one of the most important conferences on machine learning and artificial intelligence, a team of researchers led by Ashish Vaswani presented the paper „Attention Is All You Need,” initiating one of the most significant shifts in AI development, particularly in the field of natural language processing (NLP). The paper introduced an innovative neural network architecture that changed the approach to sequence processing. Unlike its predecessors, such as Recurrent Neural Networks (RNNs), this transformer model was not bound to sequential data processing, allowing it to process entire data sequences simultaneously. This greatly reduced the time needed to train the model while increasing its efficiency, especially on large datasets. The traditional transformer architecture consists of two main parts: the encoder, which processes the input sequence, and the decoder, which generates the output sequence. In applications like machine translation, encoder and decoder work together to process and generate text. Both consist of a series of identical blocks that combine attention layers with fully connected neural network layers. The transformer’s central element is the „attention” mechanism, which allows the model to focus on different parts of the data sequence while processing information, enabling it to better capture context and dependencies in the data. The transformer architecture quickly became the foundation for new models in NLP, including groundbreaking systems like BERT (Bidirectional Encoder Representations from Transformers), developed by Google, and GPT (Generative Pre-trained Transformer), created by OpenAI.
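The core idea of the attention mechanism described above can be sketched in a few lines of code. The following is an illustrative NumPy implementation of scaled dot-product attention only, not a full transformer; the toy matrices are arbitrary example data.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V"""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the keys
    return weights @ V                                # weighted sum of value vectors

# Toy example: 3 tokens, embedding dimension 4 (values are arbitrary)
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one context-aware vector per token
```

Each output row mixes information from all three input tokens at once, which is exactly what frees the transformer from the step-by-step processing of RNNs.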
In summary, the presentation of this paper was a significant moment that brought to the scientific community’s attention new possibilities in language processing and machine learning. Its impact extended far beyond the conference itself, becoming a milestone in AI development.
In 2018, the first iteration of GPT was presented. Despite its innovation, it was only a prelude to what was soon to come.
The year 2019 saw the release of the next version, GPT-2, which gained much attention for its ability to generate text. However, due to ethical concerns about advanced AI technologies and fears of misuse, OpenAI initially withheld the full release of the model.
In 2020, OpenAI introduced GPT-3, with 175 billion parameters, making it one of the largest language models at the time. GPT-3 was also fully commercialized, enabling companies and developers to create applications based on it.
Further development continued with the subsequent versions, GPT-3.5 and GPT-4, which became incredibly popular and are now a permanent part of the „tool arsenal” used daily by people worldwide. GPT-3.5 introduced improvements in understanding and generating more complex and contextual responses. Focus was placed on improving the ability to conduct coherent and sensible conversations, better context retention, and more advanced natural language processing. GPT-4, naturally, was an even more advanced model, enriched with capabilities like generating graphics using DALL-E and reading content from images inserted in chats. It’s also worth delving into the history of DALL-E, also created by OpenAI, which was launched in early 2021. It was the first model to use GPT-3 techniques to generate images from textual descriptions. The next update, DALL-E 2, introduced in 2022, featured significantly improved image quality, better and more realistic details from textual instructions, and allowed users to modify existing images. The subsequent iteration, DALL-E 3, was even more efficient, faster, and generated higher-resolution images. Algorithms were refined compared to their predecessors to better handle more complex and detailed queries.
The number of parameters in language models like GPT-3, GPT-3.5, and GPT-4 is a key indicator of their complexity and potential capabilities. For comparison, GPT-2 had 1.5 billion parameters, while GPT-3 reached 175 billion. For versions 3.5 and 4, such information has not been disclosed, but it is highly probable that the number significantly exceeds 175 billion, allowing for even greater complexity and stronger language understanding and generation capabilities. However, increasing the number of parameters also brings greater technical challenges, such as computational demands and performance optimization.
Continuing with AI-generated images, let’s also look at other examples. The first model to mention is Bing Image Creator, an image generation tool developed by Microsoft, part of the company’s larger strategy in artificial intelligence and machine learning. Microsoft’s AI efforts go back many years, encompassing the development of technologies like Cortana, their voice assistant, and numerous projects in machine learning and natural language processing. Developing their Bing search engine, Microsoft focused on integrating advanced AI technologies to improve user experiences in searching and browsing content. This led to experiments with various forms of AI, including image generation. The development of image generation tools by other companies, such as OpenAI’s DALL-E, likely motivated Microsoft to accelerate its own solutions in this field. With extensive AI experience and vast resources, Microsoft was able to quickly respond to these trends. Ultimately, Bing Image Creator was announced as a tool that allows users to generate images based on textual descriptions. Integration with Bing enabled easy access to the tool through the search engine, offering a unique combination of image search and generation.
Microsoft also developed its own AI technology similar to OpenAI’s ChatGPT. This technology, known as Turing Natural Language Generation (T-NLG), was announced by Microsoft in February 2020. Named after Alan Turing, a pioneer of computer science, T-NLG was at the time of its announcement one of the largest language models, containing an impressive 17 billion parameters and enabling more complex and coherent text generation. Microsoft’s commercially released offering in this area is the „Azure OpenAI Service”, which provides access to advanced language models, including OpenAI’s, through the Microsoft Azure cloud platform, allowing companies and developers to integrate advanced AI features into their products.
At this point, many readers may have noticed that Microsoft also provides access to GPT and wondered about the connection between the two entities. The explanation is quite simple: developing and training large AI models, like GPT-3, requires significant computational power and infrastructure. OpenAI needed a partner that could provide the necessary resources. In 2019, Microsoft announced a $1 billion investment in OpenAI. This investment was a strategic move to support AI research and development. As part of the collaboration, OpenAI decided to use the Microsoft Azure cloud platform to train its AI models. Azure not only offered the necessary computational power but also advanced tools and services key to AI development. This alliance was primarily a response to the growing global interest in AI and the need to deliver more advanced and scalable solutions in this area. Microsoft, as a global tech giant, seized this opportunity to keep up with the competition and maintain its position. This collaboration is an example of how large tech corporations can work with innovative AI companies to accelerate the development and implementation of new technologies. It also sets a direction for how future initiatives and partnerships in AI might evolve, in which one entity with the right technology and „know-how” but insufficient resources partners with another that can provide what is needed.
Another noteworthy project generating images from textual instructions is the Midjourney project, created by Midjourney, Inc. Images generated by this model are considered public domain and are not subject to copyright protection. It was quickly adopted by various industries, including advertising and architecture, for its ability to rapidly generate original content and visualizations. However, the tool has faced criticism from some artists who argue that it uses their original creative works in its training set. Despite this, Midjourney has a DMCA policy for removing content upon artists’ requests.
This is not the end, but just the beginning of what we can expect in the near future. Currently, Adobe Inc. is developing a new feature for Adobe Photoshop called „Generative Fill,” powered by Adobe Firefly. It allows users to add and remove content from images non-destructively, using simple textual commands, and enables quick and easy creation of photorealistic or surrealistic ideas, saving users time in content creation. Generative Fill is an exciting step in integrating AI with creative tools.
Of course, this is only a glimpse of the AI tools available for image generation. Tools like LeonardoAI and Google’s DeepDream have also gained significant popularity.
Google, as one of the leaders in innovation, has a long history of research and development in AI, and their contributions include a wide range of technologies, from machine learning algorithms to the development of advanced artificial intelligence systems. Google began focusing on AI relatively early. One of the first significant steps was the acquisition of DeepMind in 2014.
Another key element in Google’s AI development is the Google Brain project, started in 2011. Google Brain focuses on deep learning, a subfield of machine learning inspired by the human brain’s workings. Google Brain created TensorFlow, an open-source machine learning platform that has become the industry standard for AI researchers and developers.
Google also develops AI technologies in more consumer-focused applications, such as Google Assistant, which uses advanced speech recognition algorithms and natural language processing to enable user interactions with devices through voice. This demonstrates how AI technologies are integrated into everyday products.
In recent years, Google has continued to innovate in the AI field, introducing new initiatives like Google Gemini and Bard. Gemini is the internal code name for Google’s efforts in generative AI models aimed at creating more advanced and understandable user interactions by generating text, images, and other forms of content.
Bard, announced in early 2023, is Google’s direct response to the popularity of OpenAI’s ChatGPT. It aims to provide users with a tool for generating content that can aid in education, creation, and solving everyday problems, utilizing advanced AI technologies, including machine learning techniques and natural language processing.
The history of the development of artificial intelligence is rich in breakthrough moments, forming an amazing story of how our species had to „grow up” to reach the point where we now find ourselves.
Significance for the Development of the Crop Cultivation Sector
It must be clearly stated that the crop cultivation sector remains technologically „backward.” As usual, the main issue revolves around finances: no one will introduce modern technologies into a sector where the cost of their development is not covered by the revenue generated from their sales. The situation would likely look entirely different if more than a fraction of producers could afford to systematically modernize their farms. The adverse situation in our „national backyard” can be readily illustrated by a wave of escalating problems and challenges, including:
➤Systematic phasing out of active substances in plant protection products
➤Irregular prices of agricultural products
➤Increase in the prices of plant protection products
➤Rise in fertilizer prices
➤Increase in energy prices
➤Increase in fuel prices
➤Rising costs of construction elements used in plant production
➤Increase in the prices of agricultural machinery and operational costs
➤Decreasing availability of labor
➤Wage increases for workers and their higher demands regarding working conditions
➤Progressive soil impoverishment
➤The need to limit the negative impact of climatic conditions on cultivation
➤Increasing consumer demands regarding food quality
➤Pressure to reduce residues of plant protection products in food
➤High living costs of potential consumers, possibly leading to reduced consumption of agricultural products
➤Increasing amount of neglected crops, leading to greater pressure from pests
➤Potential strong competition from the East in the future
➤Weakening interest in taking over farms by the next generation
➤European Union policies regarding the crop cultivation sector (Green Deal)
As humans, we are quite adept at solving problems and facing adversities, and this should also be remembered in the context of crop production. Let’s move on to the possibilities associated with the development of technology and AI, which are almost thrown in front of us and are „begging” to be utilized.
IoT Devices
In the context of crop production, IoT (Internet of Things) devices refer to integrated systems and digital devices used for monitoring, controlling, and optimizing processes related to plant cultivation. These smart devices collect data from various sources and use it to automate and improve crop production processes, such as irrigation, fertilization, pest control, and yield forecasting. Examples of these devices include soil sensors, irrigation systems, drones, satellites, mobile apps, climate sensors, robots and autonomous vehicles, early warning systems for various threats, cameras and imaging sensors, and the communication networks and platforms associated with IoT.
Currently, we are witnessing increasingly intensive implementation of technologies that fall under the IoT framework. These devices, in line with their purpose, often collect vast amounts of data, necessitating their appropriate real-time analysis for full utilization. Although manufacturers of these technologies often also provide the relevant software for data analysis, it is important to emphasize that the current capabilities of such software are just a fraction of the potential offered by software enhanced with AI. In the near future, we can expect to see comprehensive modernizations of existing programs, as well as the creation of entirely new ones capable of combining and finding analogies in data from a much broader spectrum of sensors that collect these data. This opens up entirely new possibilities in the context of making optimal decisions dependent on a range of factors. Bearing in mind that a modern farm should be run like a large company to achieve good financial results, it is essential to seek savings, which can also come from making optimal decisions. By reducing errors, we avoid triggering a chain reaction whose consequence is incurring additional costs to rectify our wrong decisions. Combining the practical knowledge of producers with data “provided” in the right form by AI could be key to achieving a competitive level relative to other producers. Producers who achieve higher yields of better quality while reducing costs will naturally develop faster, dominating the market and reducing the profitability of farms that lag behind. Let’s take a closer look at individual technologies.
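How sensor data can feed such decisions may be illustrated with a deliberately simple sketch: aggregating readings from several soil-moisture sensors and applying a rule-based irrigation recommendation. The sensor names, moisture target, and tolerance are hypothetical values for illustration only; real AI-supported software would replace this fixed rule with a learned model combining many more factors.

```python
from statistics import mean

# Hypothetical readings from soil-moisture sensors (% volumetric water content)
sensor_readings = {"field_A_1": 18.5, "field_A_2": 21.0, "field_A_3": 17.2}

MOISTURE_TARGET = 25.0   # assumed crop-specific target, % VWC
TOLERANCE = 2.0          # assumed acceptable deviation

def irrigation_decision(readings, target=MOISTURE_TARGET, tol=TOLERANCE):
    """Return a simple rule-based recommendation from aggregated sensor data."""
    avg = mean(readings.values())
    if avg < target - tol:
        return f"irrigate (avg moisture {avg:.1f}% below target {target}%)"
    if avg > target + tol:
        return f"hold irrigation (avg moisture {avg:.1f}% above target {target}%)"
    return f"no action (avg moisture {avg:.1f}% within tolerance)"

decision = irrigation_decision(sensor_readings)
print(decision)  # the example readings average 18.9%, so irrigation is recommended
```

Even this trivial aggregation shows the principle: the value lies not in any single sensor, but in combining readings into one actionable decision.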
We will likely observe changes most rapidly in typical agricultural crops, where the degree of mechanization is highest, and in the most advanced soilless cultivation systems. In these systems, thanks to a large number of sensors, incredible precision can be achieved, provided that we can effectively utilize the collected data.
Soilless Plant Cultivation, a method of growing plants without soil, encompasses several main soilless cultivation systems:
Hydroponic Systems:
➤ Nutrient Film Technique (NFT): This hydroponic method involves plants placed in special troughs or channels through which a thin layer of nutrient solution, usually a few millimeters deep, flows. This layer provides plants with nutrients, water, and oxygen. Plants are typically placed in the troughs using small baskets or holders without traditional growing media. The nutrient solution is continuously pumped through the system, with excess collected at the end of the trough and reused.
➤ Flood and Drain System (Ebb and Flow): Characterized by periodically flooding plant roots with nutrient solution. Plants are placed in containers or trays that are periodically flooded with the nutrient solution. After flooding, the solution drains, providing roots with air access. The duration and frequency of flooding are controlled by a timer system, allowing for precise adjustment to the needs of specific plants. Often, a lightweight growing medium like perlite is used to support plants and aid moisture retention.
➤ Deep Flow Technique (DFT): Similar to NFT but differs in the depth of the nutrient solution. In DFT, the nutrient solution is deeper, often several centimeters, allowing plant roots to be fully submerged. Like NFT, the nutrient solution is continuously pumped through the system. With a larger volume of nutrient solution, DFT is more resilient to fluctuations in nutrient supply.
➤ Aeroponics: An advanced form of hydroponics. In an aeroponic system, plant roots are suspended in the air and regularly sprayed with a nutrient solution mist. This method provides optimal access to oxygen and nutrients, often resulting in faster growth and higher crop yields.
➤ Aquaponics: A unique combination of aquaculture (fish farming) and hydroponics. In aquaponics, waste produced by fish is converted by bacteria into nutrients for plants. Plants, in turn, purify the water, which can be reused in the fish farming system. It’s a closed-loop system that is sustainable and environmentally friendly.
Solid Media-Based Systems:
➤ Vermiculite, Perlite, Rockwool: Plants are grown in these materials, which provide mechanical support and retain water and nutrients.
➤ Coconut Coir, Peat, Brown Coal: These organic carriers can also be used in soilless cultivation.
Soilless cultivation systems based on solid carriers require appropriate containers or structures to hold the carrier and plants.
➤ Pots and Containers
➤ Grow Bags
➤ Troughs and Channels
➤ Grow Plates
➤ Modular Containers
➤ Hanging Containers
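The timer-controlled flooding described for the ebb-and-flow system above can be sketched as a small scheduling routine. The cycle lengths here (15 minutes of flooding every 4 hours) are purely illustrative assumptions, not recommendations for any particular crop.

```python
from datetime import datetime, timedelta

# Hypothetical ebb-and-flow schedule: flood for 15 minutes every 4 hours
FLOOD_DURATION = timedelta(minutes=15)
CYCLE_LENGTH = timedelta(hours=4)

def daily_flood_windows(start: datetime):
    """Generate (flood_start, flood_end) windows for one 24-hour period."""
    windows = []
    t = start
    while t < start + timedelta(hours=24):
        windows.append((t, t + FLOOD_DURATION))
        t += CYCLE_LENGTH
    return windows

schedule = daily_flood_windows(datetime(2024, 1, 1, 6, 0))
for begin, end in schedule:
    print(begin.strftime("%H:%M"), "->", end.strftime("%H:%M"))
# With a 4-hour cycle this yields 6 flood windows per day
```

In an AI-enhanced system, the fixed `CYCLE_LENGTH` would instead be adjusted continuously from moisture and climate sensor data, which is precisely where such systems stand to gain from the software discussed earlier.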
Given the dynamic development of technology, which also increases energy demands, it’s pertinent to mention solutions currently being implemented and developed to partially address this issue. One of the most interesting technologies being researched today is „agrivoltaics,” a combination of agriculture („agri”) and solar energy production („voltaics”).
In agrivoltaic systems, photovoltaic panels are installed on agricultural lands, but in a way that does not interfere with plant cultivation or animal husbandry. The aim of agrivoltaics is the dual use of the same lands for both food production and renewable energy generation.
There are various approaches to designing and implementing agrivoltaic systems. In some cases, solar panels are placed high above the ground, creating a structure similar to a pergola under which traditional agricultural activities can take place. In other cases, panels are arranged to provide shade for crops or animals, which can be particularly beneficial in hot climates.
The benefits of agrivoltaics include:
➤ More efficient land use: Generating both energy and food on the same lands can help mitigate the conflict between the need for land for agricultural purposes and the development of energy infrastructure.
➤ Plant protection: In some cases, panels can protect plants from extreme weather conditions such as heavy rain or intense sunlight.
➤ Reduced water usage: The shade cast by the panels can reduce the rate of water evaporation from the soil, which is beneficial in drought-prone regions.
➤ Diversified income for farmers: By selling the solar energy produced, farmers can generate an additional source of income.
Agrivoltaics represents an innovative solution in the context of sustainable development and can contribute to achieving goals related to environmental protection and efficient use of natural resources.
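The land-use benefit of agrivoltaics is often quantified in research with the Land Equivalent Ratio (LER). The sketch below computes it from illustrative yields; the specific numbers are assumptions, not measured results.

```python
def land_equivalent_ratio(crop_yield_agri, crop_yield_mono,
                          energy_yield_agri, energy_yield_mono):
    """LER = (crop yield under panels / sole-crop yield)
           + (energy from agrivoltaics / sole-PV energy).
    LER > 1 means the combined system uses land more efficiently
    than splitting it between separate crop and PV plots."""
    return (crop_yield_agri / crop_yield_mono) + (energy_yield_agri / energy_yield_mono)

# Illustrative figures only: the shaded crop keeps 80% of its sole-crop
# yield, while the sparser panels deliver 60% of a dedicated PV farm's output.
ler = land_equivalent_ratio(
    crop_yield_agri=6.4, crop_yield_mono=8.0,
    energy_yield_agri=0.6, energy_yield_mono=1.0,
)
print(f"LER = {ler:.2f}")  # 1.40, i.e. a 40% gain in land-use efficiency
```

The point of the metric is that even though each output individually drops below its monoculture maximum, the combined use of the same hectare can come out well ahead.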
Biophotovoltaics, which utilize natural photosynthesis processes, are also inspiring. Research into the use of plant photosynthesis for the direct production of electrical energy opens fascinating prospects for future energy technologies. AI can facilitate the integration of energy produced from plants with the existing energy infrastructure, managing distribution and storage of energy. AI systems can predict patterns of energy demand and automatically adjust renewable energy production, ensuring stable and efficient supplies.
Energy crops and agricultural waste serve as biomass for the production of thermal and electrical energy. Fast-growing grasses, such as miscanthus, sugarcane, oil plants, and even remnants from plant production, find a second life as a valuable source of renewable energy. The fermentation of plant residues to produce biogas is also possible. AI can monitor and optimize fermentation processes in biogas production, predicting optimal conditions for processing plant residues and other organic materials. Through real-time data analysis, AI systems can adjust process parameters, such as temperature and humidity, to maximize biogas production while reducing emissions of harmful gases. Furthermore, bioethanol and biodiesel, derived from sugary, starchy, and oily plants, present sustainable alternatives to fossil fuels. They offer a reduction in dependency on petroleum and a decrease in harmful gas emissions. In the production of bioethanol and biodiesel, AI can assist in identifying and selecting the most efficient raw materials and optimizing chemical and fermentation processes. Artificial intelligence algorithms can also predict demand for biofuels and manage supply chains, minimizing losses and increasing profitability.
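The parameter adjustment mentioned above can be illustrated with a minimal rule-based sketch that nudges digester temperature toward a mesophilic setpoint. The setpoint, deadband, and gain are illustrative assumptions; a real AI system would use a learned process model rather than a fixed proportional rule.

```python
# Illustrative controller for a biogas digester's heating system.
# ~37 °C is a commonly cited mesophilic optimum; values here are assumptions.
SETPOINT_C = 37.0
DEADBAND_C = 1.0

def heater_adjustment(measured_temp_c, gain=0.5):
    """Proportional correction in °C to apply to the heating system."""
    error = SETPOINT_C - measured_temp_c
    if abs(error) <= DEADBAND_C:
        return 0.0              # within deadband: leave the heater alone
    return round(gain * error, 2)

print(heater_adjustment(33.0))  # 2.0  -> heat up
print(heater_adjustment(37.5))  # 0.0  -> within deadband
print(heater_adjustment(40.0))  # -1.5 -> cool down
```

The deadband prevents the heater from chasing small measurement noise, a design choice that matters in slow biological processes where overcorrection wastes energy.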
A key challenge is to find models of collaboration that maximize benefits for the agricultural sector, energy sector, and the environment. Pursuing synergy between crop production and energy generation is not only possible but essential for ensuring the food and energy security of our planet. The role of artificial intelligence in the synergy between crop production and the energy sector is multifaceted and growing with technological progress. AI not only enhances the efficiency and sustainability of these sectors but also opens new avenues for innovation and green energy transformation. Implementing AI in agriculture and energy production can significantly contribute to achieving global goals related to climate protection and sustainable development.
Drones
In recent years, drones have experienced a tremendous leap in both development and commercialization, gaining popularity worldwide. They certainly deserve a closer look, so let’s delve into the history of their development, exemplified by the brand DJI (Da-Jiang Innovations), which has forever changed the world of drones. To clarify, a drone, often defined as an Unmanned Aerial Vehicle (UAV), is a flying device that can be remotely controlled or fly autonomously using built-in flight control and navigation systems. Drones vary in size, shape, capabilities, and purposes for which they are designed. Drone applications are incredibly diverse, including aerial photography and filming, inspections, land monitoring, agricultural work, scientific research, rescue operations, and military applications. They are typically equipped with image and flight stabilizers, high-resolution cameras with real-time image transmission capabilities, thermal imaging options, GPS systems, and obstacle avoidance sensors. These devices, packed with amazing technology, can often fit in the palm of your hand and weigh under 250 grams.
DJI, currently one of the world’s leading manufacturers of commercial and consumer drones, began its journey in 2006 in Shenzhen, China. The company was founded by Frank Wang (Wang Tao), who began working on his drone project while a student at the Hong Kong University of Science and Technology. Wang proved to be a visionary, dreaming of creating the world’s best UAVs and revolutionizing how people view this technology. His passion for aviation and robotics was driven by the desire to create innovative, accessible, high-quality drones for both enthusiasts and professionals in various fields. The establishment of DJI was a key step in realizing this dream. Wang’s main goals included contributing to advancements in aviation and technology by offering products that combine advanced technical solutions with ease of use. From the beginning, he focused on innovation and continuous development to create drones that were not only efficient and reliable but also accessible to a broader user base. In 2006, Wang began building prototypes in his dorm room, focusing on creating stable and reliable UAV flight systems. Between 2008 and 2010, DJI released its first products, including flight controllers, which quickly gained popularity among amateur and professional aviation modelers. In 2013, another breakthrough occurred with the introduction of the Phantom model, an advanced UAV with a camera, which rapidly gained market popularity, especially among photography and filming enthusiasts. After 2014, fueled by its initial successes, DJI continued to develop its product line, introducing more advanced drones like the Inspire, Mavic, and Spark series, as well as solutions for specialized applications in agriculture, filmmaking, rescue, and research, gaining recognition in international markets and becoming an undeniable leader among drone manufacturers.
DJI didn’t just produce drones; it also developed technologies such as obstacle avoidance systems, advanced image stabilization systems, and mobile applications for video control and editing. Today, the brand is also a leader in the production of gimbals, action cameras competing directly with the GoPro series, and sound recording devices. DJI’s current flagship drone models include the Mini, Mavic, Air, Avata, FPV, Matrice, Agras, Inspire, and Enterprise series.
At the same time, FPV (First Person View) systems have gained popularity. With special goggles worn on the head, we see the image transmitted from a camera mounted on a drone, providing a first-person view as if we were in the airspace. These drones are primarily used for the pleasure derived from their unlimited aerial maneuverability, capturing breathtaking footage, and unfortunately, in military applications as an ideal tool for reconnaissance and for carrying and precisely delivering explosive payloads. An FPV set typically consists of goggles for receiving the drone’s camera feed, the drone itself, and the control system. This market includes a range of manufacturers, with the DJI O3 system currently being the most popular for real-time image transmission from the drone to the goggles. However, the situation is different regarding the communication system connecting the drone to the control apparatus. One of the leading systems is ExpressLRS (ELRS), a long-range radio communication system specifically designed for drones and other remotely controlled models. Its efficiency, reliability, and flexibility have earned it recognition and popularity in the related community. Its biggest competitor at present is the TBS Crossfire communication system. A receiver installed in the drone connects it to a control apparatus based on the same system. Leading manufacturers of control apparatus include brands like RadioMaster and TBS. Among the leaders in FPV drones, it may not be surprising that all have their origins in China, including brands such as DJI, BetaFPV, iFlight, and GEPRC. Each of these manufacturers offers something unique for different needs and preferences of FPV users, from drones for beginners to advanced systems for racing, freestyle, long-distance flights, and capturing footage with professional cameras. The choice of the right FPV drone depends on personal preferences, flying style, and the pilot’s experience level.
After this extended introduction, we can move on to the practical application of drones in plant production, which is very broad. It’s worth starting with the purpose for which drones were originally created, namely photography and video recording. Running a farm is increasingly about creating a brand, and attention focused on this aspect is growing. Even DJI models from the Mini series allow us to create amazing shots of our crops, enabling us to promote our farms on social media, gaining new customers with minimal effort and investment.
Drones equipped with LiDAR (Light Detection and Ranging) systems can create accurate three-dimensional maps and models of the environment. Reflected laser pulses create a dense point cloud, representing the surfaces of objects on the ground. These points form the basis for creating three-dimensional models. Although the process of creating these models requires special software and expert knowledge, it is relatively fast and allows for the production of compact, high-quality maps. The diversity of available LiDAR systems allows organizations to replace other measurement methods, such as photogrammetry, or to transfer ground-based LiDAR measurements to the airspace using drones. LiDAR is becoming increasingly accessible, opening up possibilities for its application in new industries and various scenarios in the coming years, including in plant production. Obtaining such maps allows for precise terrain analysis, which is particularly important for identifying frost pockets or areas especially prone to flooding. It also enables more efficient planning of structures for plant cultivation under covers. However, it’s important to note that these three-dimensional images do not provide details similar to photographs. LiDAR does not capture the colors of objects on the ground, meaning such information must be obtained from other sources, such as cameras mounted on drones. DJI drones are also commonly used by surveyors for terrain mapping, but they rely on different systems than LiDAR.
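The frost-pocket identification mentioned above boils down to finding local depressions in the elevation model derived from the LiDAR point cloud. The sketch below, with a made-up 4x4 elevation grid, flags grid cells strictly lower than all eight neighbours; it is a crude illustrative proxy, since real analyses also account for drainage and slope.

```python
import numpy as np

def frost_pockets(elevation):
    """Flag grid cells strictly lower than all 8 neighbours --
    a simple proxy for depressions where cold air can pool.
    Border cells are never flagged (edge padding compares them to themselves)."""
    e = np.pad(elevation, 1, mode="edge")
    inner = e[1:-1, 1:-1]
    lowest = np.ones_like(inner, dtype=bool)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            neighbour = e[1 + di : e.shape[0] - 1 + di,
                          1 + dj : e.shape[1] - 1 + dj]
            lowest &= inner < neighbour
    return lowest

# Hypothetical digital elevation model in metres (4x4 cells)
dem = np.array([
    [5.0, 5.1, 5.2, 5.3],
    [5.1, 4.6, 5.1, 5.2],
    [5.2, 5.0, 5.1, 5.1],
    [5.3, 5.2, 5.1, 5.0],
])
mask = frost_pockets(dem)
print(np.argwhere(mask))  # flags the 4.6 m depression at row 1, column 1
```

On a real LiDAR-derived grid with millions of cells, the same vectorized comparison runs in seconds, which is what makes drone surveys practical for this kind of terrain screening.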
Currently, drones are increasingly appreciated in many fields for their ability to conduct inspections and surveillance over very large or hard-to-access areas; whether in construction, major engineering projects, locating animal herds, or assessing damage caused by extreme weather events, drones are often the optimal solution.
DJI drones from the Matrice series are equipped with thermal imaging cameras, offering significant possibilities especially in forestry and animal husbandry-focused agriculture. In forestry, these drones can be used to monitor wildlife, helping in the assessment of their population, tracking migrations, and identifying endangered species. Thanks to thermal imaging cameras, drones allow for the observation of animals in inaccessible areas and at night. Moreover, these drones can be used for patrolling large forest areas to detect and prevent poaching. The thermal imaging camera is particularly useful at night when most poaching occurs. In agriculture, these drones can be used to monitor the health and welfare of livestock. They can quickly survey large herds, identify sick or injured animals, and monitor their behavior and living conditions. Additionally, they can help manage pastures by monitoring the condition of grasses and vegetation, which is important for maintaining a healthy environment for livestock. Similar to forestry, drones can detect potential threats to livestock, especially in terms of predators or unauthorized human activities. In scientific research related to forestry and agriculture, drones allow for the collection of data on biodiversity, animal migration patterns, and their interactions with the environment. UAVs also support the management of natural resources, providing information necessary for the effective planning of the use of forest and agricultural lands in a way that is sustainable and wildlife-friendly.
With the application of advanced cameras and sensors in UAVs (Unmanned Aerial Vehicles), plant growers have the ability to obtain high-resolution photographs and films along with multispectral data, which allow for the collection of information such as:
➤ Vegetation Indices (e.g., NDVI, SAVI, NDRE): Derived from multispectral images, these indices are crucial for assessing the health, vigor, and biomass of plants. NDVI (Normalized Difference Vegetation Index) and NDRE (Normalized Difference Red Edge) are particularly useful for understanding plant health and stress levels. SAVI (Soil-Adjusted Vegetation Index) is helpful in areas with significant soil background influence.
➤ Plant Stress Detection: Early detection of plant stress caused by pests, diseases, nutritional deficiencies, or water stress is possible due to changes in the spectral signature of plants. This can lead to more effective and timely interventions.
➤ Irrigation Management: Assessing vegetation health and soil condition can aid in optimizing irrigation schedules and identifying areas that are under- or over-irrigated, leading to more efficient water usage.
➤ Yield Forecasting: Multispectral data can be used to estimate yields by analyzing plant health and coverage. This allows for better planning and resource allocation.
➤ Weed Detection: By distinguishing crops from weeds based on their spectral signature, drones can assist in identifying areas requiring weed control, supporting more targeted herbicide application.
➤ Phenology Monitoring: Tracking the growth stages of plants (phenology) is key for optimal timing of fertilization, irrigation, and harvesting. Drones provide a way to monitor these stages across the entire field.
➤ Soil Health Analysis: Although only indirectly, multispectral data can provide insights into soil health, revealing areas of erosion, compaction, or differences in soil type.
➤ Plant Counting and Spacing Analysis: Drones can automate the process of counting plants and analyzing their spacing, which is crucial for estimating plant populations and assessing planting efficiency.
➤ Damage Assessment: Following a natural disaster (such as hail or storm), drones can quickly assess damage to crops, aiding prompt response and insurance processes.
➤ Precision Agriculture: Integrating drone data with other technologies, such as GPS and GIS, can enhance precision agriculture practices, enabling highly efficient, localized crop management.
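The vegetation indices listed above are simple arithmetic on per-pixel band reflectances. The following sketch computes NDVI, NDRE, and SAVI with NumPy; the tiny sample arrays are hypothetical, as real values come from calibrated multispectral imagery.

```python
import numpy as np

def ndvi(nir, red):
    # Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)
    return (nir - red) / (nir + red + 1e-9)

def ndre(nir, red_edge):
    # Normalized Difference Red Edge: (NIR - RedEdge) / (NIR + RedEdge)
    return (nir - red_edge) / (nir + red_edge + 1e-9)

def savi(nir, red, L=0.5):
    # Soil-Adjusted Vegetation Index; the L term dampens soil-background influence
    return (nir - red) * (1 + L) / (nir + red + L)

# Toy 2x2 reflectance rasters (values between 0 and 1)
nir = np.array([[0.60, 0.55], [0.20, 0.50]])
red = np.array([[0.10, 0.12], [0.18, 0.11]])
print(np.round(ndvi(nir, red), 2))  # healthy vegetation scores high; bare soil near 0
```

Because the operations are element-wise, the same functions scale unchanged from a 2x2 toy raster to a full orthomosaic of a field.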
Let’s revisit DJI drones, which serve as a primary example of using unmanned aerial vehicles for applying plant protection products, fertilizers, and biostimulants. The DJI Agras series drones are advanced, specialized flying devices designed with agricultural applications in mind. They are part of the growing trend of using drone technology to increase efficiency and sustainability in agricultural practices. Among the most advanced drones in this series are the T30 and T40 models. Both models have highly developed spraying systems that allow for efficient and precise application of liquids such as fertilizers or pesticides. These systems are designed to maximize coverage while minimizing the use of substances. The spraying system allows for the regulation of fluid flow, which enables the adjustment of the amount of agent used depending on the needs. The atomization technology in these drones transforms the liquid into fine droplets, increasing the efficiency and uniformity of coverage. Finer droplets adhere better to leaves, enhancing the effectiveness of sprays. Moreover, these systems are capable of automatically adjusting to the drone’s flight speed and changing atmospheric conditions, ensuring even coverage regardless of external factors. They are equipped with intelligent flight planning and navigation functions, allowing for automatic mapping and spraying of large areas with precise coverage. To enhance their safety, they are equipped with advanced safety systems, including obstacle avoidance sensors, which assist in safely conducting tasks in complex agricultural terrain.
Currently, the range of unmanned aerial vehicles (UAVs) available for agricultural operations is much more diverse, so it’s worth mentioning other manufacturers of these extraordinary devices.
XAG, formerly known as XAircraft, is a Chinese tech company that has gained a reputation as one of the leading agricultural drone manufacturers in the world. Founded in 2007, the company has since focused on developing innovative solutions for agriculture. XAG is known for its advanced technologies tailored to the needs of modern agriculture, emphasizing increasing efficiency and sustainable practices. XAG drones are also equipped with sophisticated liquid spraying systems and are used for precise seed sowing. They enable accurate placement of seeds in specific locations, enhancing germination rates and optimizing plant distribution. These drones provide quick and efficient seed dispersal over large areas, significantly more efficient than traditional methods. Equipped with advanced sensors and cameras, these UAVs can collect data on crop conditions, including information on moisture, temperature, and plant health.
Yamaha, primarily known for manufacturing motorcycles, boats, and other vehicles, also holds a significant position in the world of agriculture through its unmanned helicopters. This brand is a pioneer in the field of unmanned aerial vehicles for agriculture. The company began researching their application in the 1980s, making it one of the most experienced manufacturers in this industry. Yamaha has introduced several models of unmanned helicopters to the market, mainly used in agriculture for various tasks, such as spraying and monitoring crops. Unlike multirotor drones more common in agriculture, Yamaha has focused on unmanned helicopters.
PrecisionHawk (USA) specializes in the development of drones and data analysis for precision agriculture and is also active in Europe.
SenseFly (Switzerland), part of the Parrot Group, is a well-known drone manufacturer, including models designed for agriculture. They offer drones capable of collecting agricultural data, which can be used for mapping and monitoring crops.
Quantum-Systems (Germany) manufactures VTOL (Vertical Take-Off and Landing) drones, which are ideal for use in agriculture, including mapping and monitoring large crop areas.
ABZ Innovation (Hungary) is a leading manufacturer of agricultural drones focused on innovative solutions for precision agriculture, primarily in the area of crop spraying.
AeroVironment, Inc. (USA) designs, develops, and produces advanced products and services for government agencies and businesses, including unmanned systems for agriculture.
Microdrones GmbH (Germany) develops, manufactures, and delivers customized and intelligent drone solutions worldwide.
Satellite imagery is also currently used to observe vegetation progress and its moisture. An example of the application of this technology is the ReelView app created by Rivulis. Such technology will likely be developed and commercially offered by more companies, allowing widespread use of satellite data and its AI analysis.
The role of artificial intelligence in the development of drones has two clear objectives. First, the vast amount of collected data needs to be analyzed by "something," and before those data can even be collected, someone has to fly the drone. Unsurprisingly, no grower can afford daily data-collection flights, simply for lack of time. So, can artificial intelligence pilot UAVs? Indeed it can, as demonstrated even in the most demanding form of piloting: FPV drone racing. Drone racing, which requires remarkable piloting skills, exceptional reflexes, and enormous amounts of training time from UAV operators, has recently been gaining popularity worldwide. In the scientific publication "Champion-level drone racing using deep reinforcement learning," published in 2023 by Kaufmann et al., the authors emphasize that mastering autonomous drone flight at a level comparable to professional pilots is an extremely difficult task, as it requires the robot to use the full range of its physical capabilities. Such a drone must precisely assess its speed and position in space based solely on data from onboard sensors. Thanks to the autonomous Swift system, based on deep reinforcement learning and refined with data from the physical world, the researchers reached the level of the world's top racing-drone pilots. Swift was tested in real-time races against three champions, including two holders of world championship titles from two international leagues. The AI won several races against each of the human pilots while also achieving the fastest recorded race time. The authors emphasize that these results are a milestone for mobile robotics and machine intelligence and may soon inspire hybrid solutions grounded in learning in physical systems.
Implementing such solutions will undoubtedly aid in the development of drone autonomy, which, even without this technology, had the capability to plan simple flight paths thanks to special systems. With new solutions, they could gain the ability to move in more complex ways, allowing for more precise task execution.
Robotics and Autonomous Machines
Could robotics be even more fascinating? Many readers have likely seen clips of robots picking fruit, often appearing slow due to the specific nature of harvesting certain species, especially those with soft structures. However, it would be premature to dismiss robotics, which may soon amaze everyone with the rapid pace of its development. Let us look at some examples. One of my favorites is the work of Boston Dynamics, founded in 1992 by Marc Raibert, a renowned robotics expert, together with colleagues from the Massachusetts Institute of Technology (MIT). Boston Dynamics, initially a spin-off from MIT, focused on motion-control algorithms and body-dynamics simulations to develop mobile robots capable of navigating difficult terrain. In 2005 it introduced BigDog, a four-legged robot developed for DARPA and capable of moving across varied terrain. Later came other robots such as PETMAN, Atlas, and Spot, differing in mobility, stability, and environmental adaptation. In recent years, artificial intelligence has been implemented in Boston Dynamics' robots, enhancing their autonomy, navigation, recognition, and interaction with the environment. Atlas, one of the most advanced bipedal humanoid robots, can walk, run, jump, lift objects, and even throw them. One of the robots, equipped with ChatGPT and voice-chat capabilities, can converse with people while moving around the lab and guiding them through it; interestingly, it can adopt different personalities. The videos shared by Boston Dynamics suggest that a world with robots living among us is not a distant future.
The Autonomous Robotic Kiwifruit project in New Zealand is a particularly impressive fruit-picking robot project. Initiated as a collaboration between the University of Auckland, the University of Waikato, Plant and Food Research, and RoboticsPlus Ltd, the project combines a wide range of knowledge and experience in agriculture, robotics, and engineering. The main goal is to develop autonomous robots capable of efficiently harvesting kiwifruit. The driving force behind such solutions was the growing problem of labor availability in New Zealand. The project focuses on developing advanced computer vision technologies, machine learning algorithms, and precise manipulative mechanisms to enable robots to autonomously navigate orchards and efficiently pick fruits without damaging them or the vines. The main challenge was to create a system that would be delicate and precise, yet efficient enough to compete with manual harvesting. This robot is also being developed for pollinating kiwi flowers, involving a system that includes an air-assisted sprayer and a vision system targeting flowers, working together with convolutional neural networks. The progress of this project is systematically presented in a series of scientific articles published by Henry Williams and others. This project exemplifies how combining knowledge and experience from different fields can lead to innovative solutions with a real impact on specific industries. This robot is already utilizing the potential of neural networks, reaffirming the practical application of current AI advancements in enhancing robots.
Unlike humanoid robots, autonomous machines are already a reality. Both in Poland and worldwide, farms use tractors equipped with systems that enable unmanned navigation across plantations. Given that a project for an autonomous self-propelled sprayer is also under development in Poland, these systems will likely be extended to other machines, such as various types of harvesters or self-propelled platforms. However, since most implements require a tractor to operate, the focus will be on automating the various cultivation procedures that require a tractor attachment. This could be especially important for farms where cultivation takes place on a large area near the farm buildings, as it could virtually eliminate the need for autonomous machines to travel on public roads. In practice, a properly trained autonomous system would drive from the farm directly to the plantation and begin its assigned task. Artificial intelligence could additionally be trained to oversee safety during operation, preventing accidents in case of system failure and stopping the machine from moving in an uncontrolled way.
One of the best examples of how technology development will change plant production, setting a clear future, is a robot capable of automatically detecting and destroying weeds. The development of such machines is a natural progression, and in the future, similar solutions can be expected in other sectors of plant protection and even plant nutrition.
Among the main leaders in developing such technology, manufacturers like John Deere can be mentioned:
John Deere: Recognized as an icon of agricultural equipment manufacturing, John Deere has been investing in autonomous agricultural robots for years. The company focuses on creating advanced machines that can automatically perform a wide range of farm tasks, from sowing to harvesting. Its technology not only increases efficiency but also helps reduce the physical labor burden in agriculture. Blue River Technology, part of John Deere, is revolutionizing crop care with its "See & Spray" technology, whose innovations enable precise weed recognition and treatment, key to reducing herbicide use and protecting the environment.
CNH Industrial: Owner of well-known brands like Case IH and New Holland, CNH Industrial is another leading manufacturer engaging in the development of autonomous technologies. The company presents concepts of autonomous tractors and other agricultural vehicles that can work efficiently in fields without constant human supervision, enhancing work efficiency and safety.
AGCO: With brands like Massey Ferguson and Fendt, AGCO is also innovating in automation and robotics. Their technological development aims to simplify daily farm tasks and increase overall agricultural productivity.
Naïo Technologies: This French company, specializing in autonomous crop-care robots for tasks such as weeding, demonstrates how technology can contribute to sustainable agriculture. Naïo's robots, such as Oz, Dino, and Ted, not only increase work efficiency but also minimize the use of chemicals in agriculture.
A prominent example of AI applied to autonomously moving vehicles is Tesla, which has been implementing this technology for years.
At this stage, it’s also essential to clarify the difference between autonomy and automation. Automation refers to systems programmed to perform specific tasks without human intervention, based on predetermined algorithms and instructions. Unlike automation, the autonomy of a system allows it to make independent decisions while adapting to changing conditions without external interference. These systems also have the ability to learn from collected data and experiences.
A great example of devices based on automation systems can be those designed to perform simple tasks like automatic sorting of various types of plant-derived products. Of course, there’s nothing to prevent these devices from being equipped with autonomous systems. Due to labor availability issues and rising employment costs, the market is seeing more automated sorters, undoubtedly speeding up and often increasing the efficiency of sorting. Incorporating AI solutions into these systems will undoubtedly further enhance their effectiveness, due to higher accuracy, error reduction, and faster operation.
We may soon wake up in a world where robots, drones, and autonomous machines perform most of the work on plantations, and the human role will only be supervising their work and servicing them.
Climate Factors, Irrigation, Nutrition, and Plant Protection
Discussing contemporary technologies related to irrigation, nutrition, and plant protection cannot be done without mentioning climate factors. Climatic conditions determine everything, which is why each growing season is unique; just when we think we have achieved perfection, the weather brings us back to earth and teaches us something new. We should therefore start with two aspects, the first being weather forecasting, both long-term and for the next few hours. In both cases, more accurate forecasts help us make better decisions and, to some extent, "predict the future" in terms of the problems we will soon face.
Artificial intelligence is increasingly influencing climate models and weather forecasting, primarily due to its ability to process vast amounts of data and find patterns within them. AI, especially machine learning (ML), is used to analyze meteorological data from satellites, weather stations, atmospheric balloons, and other sources. At the same time, AI helps create more advanced climate models in less time, which can simulate potential climate-change scenarios. With improved understanding of climatic processes and more accurate, faster forecasts, we gain access to more detailed predictions, which facilitates decision-making and, at times crucially, improves our ability to respond to future extreme weather events. Thanks to more efficient data analysis, AI will likely uncover new connections and dependencies in climate data that existing, far less capable meteorological analysis systems have overlooked.
Similarly, in the case of analyzing data from our meteorological stations located on our farms, we can expect significant improvements with the help of well-developed programs utilizing AI technologies. These advancements will provide much better data analyses, which can be interpreted in the context of irrigation, protection, and nutrition of plants.
Starting with plant protection: the drones mentioned earlier, as well as other devices, including smartphones equipped with appropriate software, enable us to identify pests. Identification applications based on simple algorithms have existed for a long time; AI will only broaden the ability to recognize all signs of pests occurring on plantations, without being limited to a specific group. This will also allow further development of autonomous machines tasked with precisely destroying pests by various methods; their effectiveness depends on how efficiently and quickly the underlying system can identify pests in the crop.
The next significant step will be the development of comprehensive models for forecasting the occurrence of diseases, pests, and even weeds. Currently, disease models that allow, for example, assessing the risk of apple scab infection are particularly popular. Such models are costly to develop and require a vast amount of data to function correctly. However, once these data are available and the biology of each pest is known, artificial intelligence will enable us to elevate plant protection to an entirely different level. This is especially important in the context of reducing the use of plant protection products and meeting consumer demands for reducing residues in food. AI will undoubtedly contribute to building new models, as it is ideally suited to finding correlations between climatic conditions and the occurrence of pests. This technology could partially answer the challenging task set for food producers within the framework of the Green Deal. For these systems to function effectively, the previously mentioned local meteorological station is essential for obtaining detailed real-time data and analysis. Such a station measures various parameters including rainfall, wind speed, humidity, air and soil temperature, and even leaf wetness.
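The logic of such forecasting models can be illustrated with a toy rule in the spirit of leaf-wetness infection models like the Mills table for apple scab: within the pathogen's active temperature range, warmer weather shortens the leaf-wetness duration needed for infection. The thresholds below are invented for demonstration only and are not agronomic advice.

```python
def infection_risk(temp_c: float, wet_hours: float) -> str:
    """Toy infection-risk rule combining temperature and leaf-wetness hours.
    All thresholds are illustrative placeholders, not real Mills-table values."""
    if temp_c < 6 or temp_c > 26:
        return "low"            # pathogen assumed largely inactive outside this range
    # Warmer (within range) -> fewer wet hours needed: a hypothetical linear proxy
    required = 30 - temp_c
    if wet_hours >= required:
        return "high"
    if wet_hours >= 0.6 * required:
        return "moderate"
    return "low"

print(infection_risk(18.0, 14.0))  # -> high
```

A real model would replace the linear proxy with curves fitted to biological data, which is exactly the kind of correlation-finding task AI is suited for.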
The situation is similar for irrigation and plant nutrition. More precise weather forecasts allow us to create better fertilization schedules for top dressing, foliar feeding, and fertigation alike.
In technologies used particularly in protected crop systems or closed systems, we have access to an even wider array of sensors, including soil/substrate pH sensors, water flow and pressure in the installation, electrical conductivity (EC), light and CO2 levels, sap flow in plant stems, and ion content. Together with previously mentioned measurable parameters, this gives us a lot of data that should ideally be analyzed in one system. Giving such a system control over fertilizer mixers used for fertigation, the entire irrigation installation, climate regulation, temperature, light, shading, nutrient composition, and ventilation of facilities is indeed an ambitious task. However, considering how much it could optimize such plant cultivation and take it to a higher level, we can expect such extensively developed autonomous systems controlled by artificial intelligence to emerge in the near future.
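As a minimal sketch of how such a system might combine sensor readings into a fertigation decision, consider the toy controller below. The sensor fields, thresholds, and action names are all hypothetical; real setpoints depend on the crop, substrate, and installation.

```python
from dataclasses import dataclass

@dataclass
class SensorReadings:
    soil_moisture_pct: float   # volumetric water content of the substrate
    ec_ms_cm: float            # electrical conductivity (mS/cm)
    ph: float                  # substrate/solution pH

def fertigation_action(r: SensorReadings) -> str:
    """Illustrative rule-based decision; an AI-driven system would learn
    these rules and setpoints from data instead of hard-coding them."""
    if r.soil_moisture_pct < 25:
        # Dry: irrigate; add nutrients only if EC has not climbed too high
        if r.ec_ms_cm < 2.0:
            return "irrigate + fertilize"
        return "irrigate with plain water"
    if r.ph < 5.5 or r.ph > 6.5:
        return "adjust pH of nutrient solution"
    return "hold"

print(fertigation_action(SensorReadings(22.0, 1.4, 6.0)))  # -> irrigate + fertilize
```

An autonomous system would close the loop: the action feeds back into valves and dosing pumps, and the next sensor readings confirm or correct the decision.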
Practical examples of such applications already exist. For instance, scientists from Imperial College London have developed a new predictive tool that uses AI to predict nitrogen levels in the soil. This allows farmers to precisely adjust fertilization to the specific needs of the soil and crops while minimizing the negative effects of over-fertilization, such as greenhouse gas emissions and soil and water pollution. This technology is currently in the prototype stage and may be commercially available within a few years.
Another example is research focusing on the use of artificial intelligence (AI), deep learning (DL), and machine learning (ML) to develop fast, accurate, and reliable methods for analyzing soil water content (SWC) and soil texture.
Blockchain
The genesis of blockchain technology is closely tied to the development of digital technologies and the need for safer, decentralized financial systems. It all began in the 1980s and 1990s with ideas related to cryptography and decentralized networks. One of the pioneers in this field was David Chaum, who worked on systems supporting anonymous digital payments based on cryptography.
In 1993, Cynthia Dwork and Moni Naor described the concept of "Proof of Work" (PoW) as a method to limit spam and Denial-of-Service (DoS) attacks. Later, in 1997, Adam Back created a PoW system called Hashcash, which was subsequently used in blockchain technology.
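The core idea of Proof of Work can be shown in a few lines: search for a nonce that makes a hash of the data start with a required number of zeros. This is a minimal Hashcash-style sketch, not the exact scheme used by any particular network.

```python
import hashlib
from itertools import count

def proof_of_work(data: str, difficulty: int = 4) -> int:
    """Find a nonce whose SHA-256 hash of data+nonce begins with
    `difficulty` zero hex digits. Finding it is costly; checking is cheap."""
    target = "0" * difficulty
    for nonce in count():
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

nonce = proof_of_work("hello", difficulty=4)
# Anyone can verify the work with a single hash:
print(hashlib.sha256(f"hello{nonce}".encode()).hexdigest()[:4])  # -> 0000
```

Raising the difficulty by one hex digit multiplies the expected search cost by 16, while verification stays a single hash; that asymmetry is what makes PoW useful against spam and, later, for securing blockchains.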
In 2008, a mysterious individual or group known as Satoshi Nakamoto published the paper "Bitcoin: A Peer-to-Peer Electronic Cash System." This document detailed the operation of a decentralized payment network based on blockchain technology. A key element was the use of a chain of blocks to secure transactions without the need for a central trust authority.
The year 2009 marked the birth of Bitcoin, today's most well-known cryptocurrency. Nakamoto implemented the first version of the Bitcoin software and mined the first block, called the Genesis block. This was the first practical application of blockchain technology and a milestone for its further development.
Following the success of Bitcoin, blockchain technology began to be used in various sectors beyond digital currencies, including finance, supply chains, public health, and many others. Ethereum, created by Vitalik Buterin, introduced the concept of smart contracts, giving another boost to the development and application of blockchain.
Unlike traditional databases, which are stored and managed centrally, blockchain disperses its data across multiple computers, making the system more resistant to attacks and failures. The technology uses a series of "blocks" containing transaction information. Each new block is linked to the previous one, forming a chain (hence the name "blockchain"). Once recorded, information is very difficult to change: each block contains a unique code, called a hash, which is closely linked to the code of the previous block, so changing information in one block would require changing all subsequent blocks. Many blockchains are public, meaning anyone can view the transactions they contain, which ensures a high level of transparency. In a blockchain there is no central decision-making authority; instead, decisions about adding new blocks are made by the network through consensus mechanisms, such as the previously mentioned Proof of Work (PoW) or Proof of Stake (PoS).
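The hash-linking mechanism is simple enough to sketch directly. The toy blocks below hold only a data string and the previous block's hash; real blockchains add timestamps, Merkle roots of transactions, and a consensus mechanism, all omitted here.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash a block's contents deterministically (sorted keys -> stable JSON)
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(data: str, prev_hash: str) -> dict:
    return {"data": data, "prev_hash": prev_hash}

def verify_chain(chain: list) -> bool:
    # Each block must reference the hash of the block immediately before it
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

genesis = make_block("genesis", prev_hash="0" * 64)
b1 = make_block("tx: A pays B", block_hash(genesis))
b2 = make_block("tx: B pays C", block_hash(b1))
chain = [genesis, b1, b2]
print(verify_chain(chain))    # True

genesis["data"] = "tampered"  # changing an old block breaks every later link
print(verify_chain(chain))    # False
```

This is why recorded information is so hard to alter: editing one block invalidates the hash stored in the next block, and so on down the entire chain.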
Blockchain enables the creation of smart contracts, which are automatically executed when certain conditions are met, without the need for third-party intermediation. In complex supply chains, blockchain can be used to trace the origin of products, which is especially important for authenticity and quality. It offers new ways of managing and securing personal data, giving users more control over their information.
Although the use of this technology in the plant cultivation sector is relatively new, it has immense potential in this field. Its impact on improving transparency, efficiency, and trust in the plant cultivation sector is already evident and tends to grow. This technology allows for the tracking of agricultural product origins by creating immutable and transparent supply chains, enabling consumers to check the origins of products, from the field to the store. This solution could increase consumer trust, particularly in regional products, ensuring their quality and authenticity.
The use of this technology enables more efficient supply chain management, facilitating the control of conditions in which plants are transported by monitoring temperature during transport. Producers can also use this technology for more effective management of data related to the operation of their farms, using it to automate and facilitate financial processes, especially transactions between different entities. Smart contracts, which automate certain processes like payments upon achieving specified cultivation or delivery conditions, can greatly facilitate transactions and security for farm owners.
The development of artificial intelligence has the potential to significantly impact blockchain technology, leading to innovative solutions in various fields. AI can improve consensus processes in blockchain, enhancing its efficiency and scalability. In terms of security, AI can analyze transaction patterns to detect fraud attempts or attacks. Moreover, AI enables deeper analysis of data collected in blockchain, which is important in areas such as finance, logistics, and public health. Automation of smart contracts through AI can bring more advanced solutions capable of self-learning and adaptation. AI also supports the optimization of energy consumption and personalization of services by utilizing blockchain data. Finally, the integration of AI with blockchain can contribute to the development of advanced decentralized applications (dApps). However, integrating AI with blockchain involves challenges related to privacy, ethics, and data security. Both AI and blockchain are relatively new technologies whose full potential and limitations are still being explored.
Biotechnology
Biotechnology, in the context of plant production, refers to the application of biological sciences, including genetics, microbiology, and biochemistry, to improve, modify, and optimize plant production processes. It encompasses a wide range of techniques, such as genetic engineering (creation of genetically modified organisms, GMOs), tissue culture (cloning of plants and rapid propagation of healthy, productive, and new varieties), and the development of biological plant protection agents.
Biotechnology is widely applied not only in programs related to genetically modified organisms (GMOs) but also in traditional breeding programs, offering tools to accelerate and enhance breeding. Techniques like Marker Assisted Selection (MAS) allow for the rapid identification of desirable genetic traits without modifying the plant’s DNA. Tissue culture and cloning enable efficient propagation of genetically uniform plants, preserving their valuable characteristics. Controlled crossbreeding and hybridization use biotechnology for selecting appropriate parents to create new genetic combinations. Genetic diagnostics facilitate early disease detection and assessment of breeding potential, while metabolic engineering is used to optimize plant metabolic pathways. Additionally, gene banks play a crucial role in preserving genetic diversity, essential for future breeding programs. All these contribute to achieving desired plant traits faster, increasing the efficiency and productivity of breeding.
Artificial Intelligence (AI) is revolutionizing plant biotechnology, accelerating research and development of new techniques. AI facilitates the analysis of genetic data, aiding in identifying genes responsible for key plant traits like disease resistance or drought tolerance. Machine learning algorithms can predict the effects of genetic changes on plant traits, speeding up the breeding of new varieties. AI automates experiments, such as tissue culture, and optimizes growth conditions, enhancing research efficiency. In bioinformatics, AI analyzes DNA sequences, easing the understanding of complex genetic interactions.
A practical example of AI’s application in research impacting advances in biotechnology is AlphaFold. Developed by DeepMind, a British AI company and part of Alphabet Inc., AlphaFold began as part of DeepMind’s long-term project in life sciences and biology. The research was led by a team of scientists under the direction of John Jumper. DeepMind’s achievements with AlphaFold were showcased at the prestigious international competition CASP (Critical Assessment of protein Structure Prediction), which tests algorithms for predicting protein structures. In the 13th edition of CASP in 2018, using machine learning techniques, AlphaFold achieved the best results among competitors and demonstrated the ability to predict protein structures with accuracy approaching experimental methods such as X-ray crystallography and NMR spectroscopy. At CASP in 2020, AlphaFold achieved record-breaking results, earning the highest ratings in the contest’s history and predicting protein structures with unprecedented accuracy. It was recognized as solving a „50-year-old problem” in biology: the ability to accurately predict a protein’s structure based solely on its amino acid sequence. These achievements were described in the publication „Highly accurate protein structure prediction with AlphaFold,” which appeared in 2021 in the journal „Nature.”
This success opened new possibilities in biological and biomedical research, enabling a better understanding of the mechanisms of protein function and their role in various life processes and diseases. AlphaFold’s results in CASP have changed the way scientists approach the problem of protein structure. Before these achievements, determining the precise structure of a protein was time-consuming and required complex experiments. AlphaFold accelerated and simplified this process, which is of immense significance for future research in molecular biology, drug development, understanding diseases, and other life sciences fields.
DeepMind also made the AlphaFold source code available, allowing scientists worldwide to utilize this technology in their research. In 2021, DeepMind, in collaboration with the European Bioinformatics Institute (EMBL-EBI), released a database containing protein structures predicted by AlphaFold, providing a valuable resource for researchers globally. The AlphaFold team received numerous accolades, including the prestigious Breakthrough Prize in Life Sciences in 2023 for the development of AlphaFold.
Communicating with AI
One of the most intriguing topics related to AI currently is the onboarding of new users to this technology. Environments fully aware of the current progress no longer debate how enormous this leap is for our species, or the need to adapt to the changes so as not to fall behind; their attention focuses instead on the dangers posed by AI, its impact on our species, and the most likely next developments of this technology. Among another group of people aware of AI’s existence, however, there is a great deal of skepticism and a tendency to „downplay” the capabilities of artificial intelligence. Negative conclusions drawn from using artificial intelligence often stem from mistakes in using this technology and a lack of awareness of the limits of individual AI systems. Even GPT-3.5 and GPT-4 differ fundamentally in their range of capabilities. One important difference is that the data used to train the ChatGPT-3.5 model extends only to around September 2021 – January 2022, while ChatGPT-4’s extends to around April 2023. Additionally, version 3.5 cannot use internet resources, unlike version 4, which has a tool enabling it to browse the internet. Another major difference is version 4’s ability to read and generate images, a function unavailable in version 3.5. So, what are the deeper consequences? Let us start with the mistakes made in understanding the correct use of artificial intelligence and, above all, awareness of its limitations. The example will be instructive, as we will show differences in functioning between these versions using a task involving frequency division with integrated circuits.
The task for the model was to choose optimal integrated circuits from the CMOS family in order to divide a frequency of 56,000 Hz down to 50 Hz. To accomplish this task, not only theoretical knowledge is needed but also access to the precise pinout diagrams of the integrated circuits. While the solutions proposed by both versions are usually correct in terms of selecting appropriate division ratios for the individual integrated circuits (though with version 3.5 in particular the results vary), the problem arises with the pins and the specific outputs assigned to them.
The correct solution was finally proposed when ChatGPT-4 accessed online resources and referred to the actual schematics of the individual integrated circuit models it suggested. In the case of ChatGPT-3.5, this is not feasible, as it does not have access to these schematics. Additionally, with ChatGPT-4, we have the capability to guide it in the „right direction,” as we can attach schematics of the integrated circuits proposed in the task solution in the chat to verify the correctness of the devised frequency division plan.
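To see why the division ratios themselves are the easy part of this task, the arithmetic of one candidate divider chain can be checked in a few lines. The chip choices below (CD4017 decade counters and a CD4040 binary ripple counter) and the factorization 1120 = 7 × 10 × 16 are illustrative assumptions for a sketch, not a verified design; the actual wiring and pin assignments would still have to be confirmed against the datasheets, which is exactly where the models stumbled.

```python
# Sketch: verify the arithmetic of a candidate CMOS divider chain
# taking 56 kHz down to 50 Hz. Chip choices are illustrative assumptions.

F_IN = 56_000   # input frequency, Hz
F_OUT = 50      # target frequency, Hz

total_ratio = F_IN // F_OUT
assert F_IN % F_OUT == 0, "ratio must be an integer for counter-based division"

# One possible factorization of 1120: 7 * 10 * 16
stages = [
    ("CD4017 wired as divide-by-7 (reset feedback)", 7),
    ("CD4017 wired as divide-by-10 (decade counter)", 10),
    ("CD4040 binary counter, output from stage Q4", 16),
]

f = F_IN
for name, ratio in stages:
    f //= ratio
    print(f"{name}: /{ratio} -> {f} Hz")

product = 1
for _, ratio in stages:
    product *= ratio
assert product == total_ratio, "stage ratios do not multiply to the required total"
print(f"Total division ratio: {product} (target {total_ratio})")
```

Running the sketch confirms the chain ends at exactly 50 Hz; what it cannot confirm, and what requires the real datasheets, is which physical pins carry those signals.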
Interestingly, ChatGPT-3.5 will propose a solution to the task, but upon verifying its accuracy it turns out that „something doesn’t fit” relative to what we see in the integrated circuit diagrams. When it is finally instructed to refer to those diagrams, only then does the model respond that, unfortunately, it does not have access to such resources. Of course, all the frustration is then directed at the artificial intelligence, but is it really solely to blame for this loss of time?
This scenario underscores the importance of understanding the capabilities and limitations of different versions of AI models. The latest versions, like ChatGPT-4, offer more advanced capabilities, including internet browsing and image processing, while earlier versions like ChatGPT-3.5 do not. Users who recognize the boundaries of each system can use it effectively and avoid misplaced frustration.
This scenario is an example of a fundamental mistake in approaching work with artificial intelligence. Users often expect a „miracle”: they give AI oversimplified commands for complex tasks that require many details, and then quickly become discouraged and critical of the models’ current capabilities upon receiving an unsatisfactory response. This attitude stems mainly from very high expectations: if artificial intelligence is so amazing, surely it reads our minds, or at least guesses everything by itself.
The truth is quite different. Artificial intelligence is modeled primarily on the human brain; it is not yet at the brain’s level, but it possesses far more knowledge than any single human being. Especially when using models like ChatGPT, we should approach it as if conversing with a living human, since one of this model’s main features is to imitate conversation with a representative of our species. In the integrated circuits example, the user should first have made sure that ChatGPT was capable of performing the task correctly and asked about it directly; this would have saved a lot of time and nerves. In real life, too, accomplishing a task depends on the quality of our communication with other people: the more clearly we specify our assumptions and the better we explain them, the greater the chance of success and of avoiding errors.
We should also remember that if artificial intelligence is not yet at the level of the human mind, how can we expect it not to make mistakes? The most brilliant minds of our species were capable of spectacular errors.
At the turn of the nineteenth and twentieth centuries, many scientists believed that the fundamental principles of physics had already been discovered and that future research would merely refine and extend existing theories, a belief captured in the famous remark that „physics will be completed in six months…” Soon after, however, the advent of quantum theory and the theory of relativity proved that physics still harbored an abundance of undiscovered secrets and was not even close to uncovering all the laws that govern our world.
Let working with artificial intelligence be, above all, an adventure and a discovery of the possibilities of this artificial „mind”; such an attitude will surely lead to countless new opportunities and speed up many activities. One of the funniest remarks about working with AI I have heard so far suggested being nice to AI because it might spare us when it takes over the world. Half-jokingly, half-seriously: maybe it is worth considering.
Tools for Scientists
It may not come as a surprise that we begin by describing another breakthrough in the use of artificial intelligence to solve scientific problems, one that challenges the thesis that only humans can „practice science.” In the scientific article titled „Mathematical discoveries from program search with large language models,” published at the end of 2023 by Romera-Paredes et al. in the journal Nature, scientists demonstrate that contemporary Large Language Models (LLMs) have achieved significant capabilities in solving complex tasks, ranging from quantitative reasoning to understanding natural language. However, LLMs sometimes confabulate (or hallucinate), producing fluent but incorrect statements. The FunSearch method introduced in the paper shows how to overcome these limitations. FunSearch is an evolutionary process that pairs a pretrained LLM with a systematic evaluator: the LLM generates candidate programs, the evaluator scores them, and the best programs are fed back into the LLM for further improvement. The key aspect is focusing evolution on the program’s most crucial logic, allowing for a more efficient and targeted search for solutions. This combination has enabled new discoveries in problems ranging from extremal combinatorics to algorithmics.
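The loop just described can be sketched in a few lines. The sketch below is a deliberately simplified stand-in: where FunSearch mutates program text with a large language model, this toy replaces the proposer with random numeric perturbations of a candidate, and the evaluator with a made-up scoring function, purely to illustrate the propose, score, and feed-back control flow.

```python
import random

def evaluate(coeffs):
    """Toy evaluator: score a candidate by how closely it matches a hidden target."""
    target = [3.0, -1.0, 2.0]
    return -sum((c - t) ** 2 for c, t in zip(coeffs, target))

def propose(parent):
    """Stub for the LLM: perturb the best candidate found so far."""
    return [c + random.gauss(0, 0.5) for c in parent]

random.seed(0)
best = [0.0, 0.0, 0.0]
best_score = evaluate(best)

# Iterative loop: propose, evaluate, and feed improvements back to the proposer.
for _ in range(2000):
    candidate = propose(best)
    score = evaluate(candidate)
    if score > best_score:
        best, best_score = candidate, score

print(f"best score: {best_score:.4f}, best candidate: {best}")
```

In the real system, the evaluator is a program that checks a mathematically meaningful quantity (such as the size of a valid cap set or the number of bins used), which is what keeps the LLM’s confabulations from contaminating the results: incorrect programs simply score poorly and are discarded.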
FunSearch was tested on two main types of problems:
Extremal combinatorics: specifically, the cap set problem, which asks for the largest possible set of vectors in Z_3^n in which no three distinct vectors sum to zero. FunSearch was used to discover new constructions of large cap sets that exceed the previously known best results. This represents the first instance of an LLM being used to find a new solution to a significant open mathematical problem.
Algorithmic bin packing: one of the central problems in combinatorial optimization. The goal is to pack a set of items of various sizes into the smallest possible number of fixed-size containers (bins). FunSearch was used to find new heuristics for this problem that improve upon commonly used methods such as „first fit” and „best fit.” The performance of these heuristics was examined on simulated data, showing significant improvements.
These two tests clearly demonstrate the versatility and potential of FunSearch in solving diverse, complex scientific and algorithmic problems.
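To make the bin-packing baselines concrete, here are the two classic heuristics named above, implemented directly. The bin capacity and item list are made-up example values; on this particular instance both heuristics use five bins, while an optimal packing needs only four, which is exactly the kind of gap a learned heuristic can try to close.

```python
def first_fit(items, capacity):
    """Place each item in the first open bin with enough remaining space."""
    bins = []  # remaining free space of each open bin
    for item in items:
        for i, free in enumerate(bins):
            if item <= free:
                bins[i] -= item
                break
        else:
            bins.append(capacity - item)  # no bin fits: open a new one
    return len(bins)

def best_fit(items, capacity):
    """Place each item in the bin that would be left with the least free space."""
    bins = []
    for item in items:
        best_i, best_free = None, None
        for i, free in enumerate(bins):
            if item <= free and (best_free is None or free < best_free):
                best_i, best_free = i, free
        if best_i is None:
            bins.append(capacity - item)
        else:
            bins[best_i] -= item
    return len(bins)

items = [5, 7, 5, 2, 4, 2, 5, 1, 6]   # example sizes, bin capacity 10
print("first fit:", first_fit(items, 10))   # uses 5 bins
print("best fit:", best_fit(items, 10))     # uses 5 bins
# An optimal packing exists with 4 bins: {7,2,1}, {6,4}, {5,5}, {5,2}.
```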
In the near future, we are likely to hear about many tools that improve the work of scientists at the fundamental level and offer the possibility of solving previously unsolvable problems. Models will often be created for very specialized and narrow use in research. The only question we might still ask is when AI and its creators will receive the first Nobel Prize. It seems that the closest to this goal at the moment, as previously mentioned in the context of discussing biotechnology, is AlphaFold.
„Game Changer” – Quantum Computer
Discoveries in the field of physics have had a tremendous impact on our world, both technologically and in terms of our understanding of the universe. This impact is multifaceted, encompassing various areas of life and science. Semiconductor physics led to the creation of transistors and integrated circuits, which in turn enabled the development of computers, smartphones, and a wide range of electronic devices that are indispensable in everyday life. Without these discoveries, artificial intelligence would never have developed. So, has physics had its final say in the development of artificial intelligence as well? Definitely not: progress in quantum mechanics has the potential to elevate AI’s capabilities to unimaginable and unpredictable levels. The quantum world is as astounding as the current capabilities of artificial intelligence, but what exactly is quantum mechanics, and how did it all begin?
In the turbulent waters of science at the beginning of the 20th century, a theory emerged that was set to redefine our understanding of reality at the most fundamental level. This theory, known as quantum mechanics, began its life as a series of puzzles that shook the foundations of classical physics.
One of the first puzzles was the issue of black body radiation, which forced Max Planck to suggest that energy is emitted in discrete quantities, or quanta. This discovery opened the door to a world where the old picture of energy’s continuous nature was obsolete. Albert Einstein took up this challenge, using the concept of quanta to explain the photoelectric effect and suggesting that light could behave like a stream of particles – photons. This was revolutionary, challenging the prevailing wave concept of light.
In the 1920s, Werner Heisenberg and Erwin Schrödinger, working independently, created two formulations of quantum mechanics. Heisenberg introduced matrix mechanics, while Schrödinger formulated the wave equation. Both theories, though different in form, described the same quantum reality. Then, in 1928, Paul Dirac revolutionized the theory even further by combining quantum mechanics with Einstein’s theory of relativity. His work predicted the existence of antimatter, another step towards understanding the mysterious quantum universe.
At this time, Niels Bohr and Werner Heisenberg formulated the Copenhagen Interpretation of quantum mechanics, which posited that quantum particles do not have defined properties before they are measured. This probabilistic view of reality was difficult to accept, even for Einstein. A proponent of determinism, he struggled to accept the probabilistic nature of quantum mechanics. His famous statement, „God does not play dice,” expressed his disagreement with the idea that events at the quantum level are inherently random and unpredictable. He believed there should be „hidden variables” that explain these phenomena in a deterministic way.
The debate around quantum mechanics reached a climax with the EPR (Einstein-Podolsky-Rosen) paradox of 1935, which argued that quantum mechanics was incomplete and introduced the concept of quantum entanglement. This opened the way to new and profound understandings of quantum reality. Today, quantum mechanics is not only the foundation of modern physics, but also the basis for the development of technologies, from transistors and lasers to quantum computers and medical imaging techniques. This incredible journey from theoretical foundations to practical applications demonstrates how far we can go when we allow ourselves to think boldly beyond the boundaries of traditional understanding.
One of the most famous and significant experiments in the field of quantum mechanics, especially in the context of quantum teleportation and entanglement, was conducted by Chinese scientists using the Micius satellite. The experiment relied on the phenomenon of quantum entanglement, in which the states of two quantum objects remain correlated regardless of the distance between them. In practice, this allows for the „teleportation” of quantum information over theoretically unlimited distances. The experiments demonstrated that teleportation is possible even over large distances, which was previously considered difficult due to sensitivity to environmental interference.
The experiment using the Micius satellite aimed to build a global quantum communication network. The satellite, part of the Quantum Experiments at Space Scale (QUESS) program, used lasers to send entangled photons. One photon was transmitted to the satellite, while the other remained on Earth. Measurements of the photons on Earth and in orbit were then made to confirm that quantum entanglement was occurring and that quantum teleportation over this distance was possible. Over 32 days, millions of photons were sent, with successful teleportation confirmed in 911 cases.
This experiment set a new standard in the field of quantum teleportation, paving the way for the development of a global quantum network and increasing the potential for secure communication technologies based on quantum mechanics. It also showed China’s dominance in this field of science, which until recently was mainly led by Europe and the United States. This experiment broke the distance record in quantum teleportation, transmitting entangled photons over a distance of 1,200 km (746 miles).
In an experiment that also garnered much attention, scientists attempted to place a tardigrade, also known as a “water bear,” in a state of quantum entanglement with a pair of qubits. In this experiment, researchers from Nanyang Technological University in Singapore used tardigrades because of their exceptional resilience and ability to survive in extreme conditions, entering a state akin to suspended animation. The aim of this experiment was to connect a quantum system with a biological system, a challenge given that life is complex, whereas quantum objects are small, cold, and well-controlled.
The scientists froze the tardigrades to a temperature near absolute zero and lowered the pressure to an extremely low level. The tardigrades appeared dead but were not – their metabolism dropped to zero, and they entered a state of cryptobiosis. Then, an attempt was made to entangle them with two superconducting transmon qubits used in quantum computers. One of the tardigrades was successfully thawed after the experiment.
Although the results of this experiment were published, other physicists are not convinced about the claims of actual quantum entanglement. Some argue that only a classical interaction between the tardigrade and the qubit was demonstrated, not true quantum entanglement. The question of whether true quantum entanglement occurred remains open and will likely require further research and review by other scientists.
This experiment was certainly pioneering and demonstrated the extraordinary resilience of tardigrades, but whether true quantum entanglement was achieved is still a subject of debate in the scientific community.
In addition to quantum communication, intensified research is ongoing in quantum cryptography to preserve the security of the world’s most critical systems, especially in light of the potential construction of a quantum computer that could easily break all existing security systems. Over the past decades, quantum cryptography has evolved from laboratory experiments to real-world applications. Many countries are working on building quantum communication networks, aiming to create exceptionally secure channels for data transmission. Quantum cryptography, once considered an abstract concept, is now at the forefront of the digital battlefield, protecting our most vital information. From theoretical paradoxes to quantum networks, its history is a fascinating journey through the complex world of physics, mathematics, and technology. As we face new challenges in the digital era, quantum cryptography stands as the guardian of our digital security, a symbol of progress and the relentless pursuit of uncovering secrets.
The quantum computer is a „double-edged sword.” On one hand, its ability to solve problems that are insurmountable for classical computers presents a potential threat to traditional methods of cryptography. On the other hand, the same quantum technologies open new possibilities for more advanced quantum cryptography systems, capable of withstanding attacks from even the most powerful quantum computers. But what exactly is this quantum computer?
A quantum computer is a computer that uses the principles of quantum mechanics to process information. It differs significantly from classical computers, which rely on bits.
The fundamental unit of information in a quantum computer is the qubit (quantum bit). Unlike a classical bit, which can be in a state of 0 or 1, a qubit can be in a state of superposition, meaning it can represent 0 and 1 simultaneously.
Superposition is a quantum phenomenon that allows a qubit to exist in multiple states at once. This enables the processing of a large amount of data simultaneously, which theoretically can significantly increase the computational power of a quantum computer.
Quantum entanglement, another unique feature of quantum mechanics, is where qubits can be entangled in such a way that the state of one qubit can depend on the state of another, regardless of the distance between them. This phenomenon can allow for very rapid exchange of information.
In quantum computers, quantum gates are used to manipulate the states of qubits. They are the equivalents of logic gates used in traditional computers but allow for much more complex operations.
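The four ideas above (qubits, superposition, entanglement, and gates) can be illustrated with a tiny classical simulation of the underlying mathematics. Everything below is standard textbook linear algebra written with NumPy; it only simulates the state vector of a quantum computer, it is not quantum hardware.

```python
import numpy as np

# Single-qubit basis states |0> and |1>
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Hadamard gate: maps |0> into an equal superposition of |0> and |1>
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

plus = H @ ket0
print("superposition amplitudes:", plus)
print("measurement probabilities:", np.abs(plus) ** 2)   # 50% / 50%

# Two-qubit circuit: Hadamard on the first qubit, then CNOT -> Bell state
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.kron(plus, ket0)   # joint state |+>|0>
bell = CNOT @ state           # (|00> + |11>) / sqrt(2)
print("Bell state amplitudes:", bell)
# Only the outcomes 00 and 11 have nonzero probability: measuring one
# qubit immediately fixes the outcome of the other. This correlation,
# persisting regardless of distance, is entanglement.
print("outcome probabilities:", np.abs(bell) ** 2)
```

The exponential appeal of quantum computing is visible even in this toy: an n-qubit state vector has 2^n amplitudes, so simulating it classically quickly becomes intractable, while a quantum machine manipulates that state directly.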
Quantum computers have the potential to solve certain types of problems much faster than classical computers, especially those requiring significant computational power, like the factorization of large numbers, optimization, or quantum simulations. These devices are still in the early stages of development and face numerous technical challenges, including maintaining the state of qubits (quantum coherence) and computational errors. Moreover, these computers require extreme conditions, such as very low temperatures, to function. Quantum computers are still in a phase of intensive research and are not yet widely available for commercial use. Nonetheless, their development represents a fascinating direction in the field of technology and computing, offering promising prospects for the future of data processing.
Artificial intelligence operating on fully functional quantum computing technology will primarily be able to process vast amounts of data faster than traditional computers. Furthermore, this advancement will accelerate machine learning algorithms, especially those requiring intensive computations, such as deep learning. It will more efficiently search parameter spaces during the training of neural networks, potentially leading to faster and more accurate learning.
The Future
Regardless of the industry under discussion, the changes already taking place as a consequence of the development of artificial intelligence and technology force us to make a decision: either take a step to become part of the ongoing development or risk falling further behind in this „arms race.” Crop production is no exception; if not our own country, then other nations certainly will be implementing, and indeed already are implementing, the latest technologies that allow them to gain a significant advantage.
Although most countries are not leaders in developing leading technologies, they often possess enormous intellectual potential, with many outstanding specialists who are frequently lost due to the lack of suitable conditions for their development, preventing them from realizing projects commensurate with their abilities. However, voices are increasingly saying that the current development of artificial intelligence represents a unique opportunity for many countries to at least partially change this trend.
I would like to draw attention to the situation of the individual. When someone feels overwhelmed by the amount of work or needs to perform many tasks at once, it is commonly said that we do not have a third hand and probably will not for a long time, although I would not bet my life on it. The most powerful „tool” we possess, however, is our mind, and history has shown that gathering many brilliant minds in one place can lead to incredible progress, as evidenced by the Manhattan Project, which developed the first atomic bomb.
At this point, I encourage reflection on how much of an advantage a person with access to the upcoming successors of ChatGPT-4 gains over those who walk a solitary path, in this case simply not using artificial intelligence. Such a person has at their disposal a second, artificial mind, one that may soon even surpass the species of the naked ape, contained literally in a smartphone, computer, or tablet. AI does not tire, complain, get annoyed, or need sleep; it can work 24 hours a day at maximum capacity without its efficiency ever decreasing, while having access to all the recorded knowledge our species possesses, lacking only what has never been written down and resides solely in individual human minds. Humans, as the dominant species on planet Earth, at least for now, can issue commands to artificial intelligence, which in the right hands has capabilities almost impossible to confine within any framework.
There is no doubt, therefore, that the societal divide may widen further between those who strived for development even without artificial intelligence and have now received an additional „superpower” in their hands, and those interested only in entertainment, for whom the development of humanity is not necessarily of interest.
The threat from artificial intelligence is real, but the last few decades have shown that the development of weapons of mass destruction ensured a relative peace in parts of the world that humans had never experienced before, so there is a chance that AI could play the same role, although there is also a real possibility that it will exacerbate the conflicts we currently observe. In the perspective of the next few months, we can expect artificial intelligence to drive significant development in all fields; predicting further ahead is like reading tea leaves and makes no sense whatsoever. The only question each of us can ask is whether, fully aware of the capabilities and potential of artificial intelligence and the associated risks, we would halt its development at this point.
We can only hope that the content of this article „ages well” and inspires work with artificial intelligence, whether to support professional work or simply to pursue a passion. I also encourage delving deeper into this topic and forming one’s own opinion, not necessarily in agreement with ours, which, like rapidly developing artificial intelligence itself, may change along with it. We end with the words of physicist and Nobel laureate Richard P. Feynman: „We are looking for the way everything works. What makes everything work.”