Main Applications of AI in Digital Marketing

Artificial Intelligence has marked its presence in almost every industry and walk of life. It has not only been reducing human intervention in various operations but also helping humans do their jobs better.

Fields like Social Media, Consumer Electronics, Robotics, Travel and Transportation, Finance, Healthcare, Security, Surveillance, E-commerce, etc. are already benefiting from AI.

Digital Marketing and AI go hand-in-hand. In digital marketing, there is a massive requirement to process tons of data. Artificial Intelligence helps digital marketers to process data faster, which allows them to create digital strategies more efficiently.

The capabilities of AI in Digital Marketing are massive. Below are the ten ways AI is revolutionizing Digital Marketing.

#1 Online Advertising

Online advertising is one of the most crucial elements of digital marketing. It helps businesses to reach out to their target audience quickly.

A majority of the online ads we see today are served by a very complicated AI-powered delivery system called “Programmatic Advertising.”

These systems facilitate the buying and selling of ad space by running auctions in which ad slots are bought and sold within milliseconds.
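As an illustration, the core of such an auction can be sketched as a simplified second-price auction, a mechanism commonly used in programmatic advertising (the bidder names and prices below are made up):

```python
# Simplified sketch of a programmatic ad auction (illustrative only).
# Real ad exchanges run far more complex real-time bidding pipelines.

def run_auction(bids):
    """Second-price auction: the highest bidder wins the ad slot but
    pays the second-highest bid (minimal increments omitted here)."""
    if len(bids) < 2:
        raise ValueError("need at least two bids")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # winner pays the runner-up's bid
    return winner, price

winner, price = run_auction({"brand_a": 2.50, "brand_b": 1.75, "brand_c": 3.10})
print(winner, price)  # brand_c wins and pays 2.5
```

The second-price design is worth noting: it encourages bidders to bid their true valuation, since overbidding cannot lower the price they pay.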

Before AI, marketers had to do intense research to figure out the right platform on which to market their business. Today, this research is done by AI, which takes the load off marketers and lets them focus on other important aspects of their digital strategy.

#2 Personalized User Experience

“Personalization is the new cool.” According to Evergage, 96% of marketers agree that personalization is key to delivering a fantastic customer experience.

AI has made it possible to figure out the likes and dislikes, behavior patterns, interests, and activities of millions of people every day. It does this by collecting and analyzing user data across psychographic, demographic, geographic, and device dimensions, among others.

The higher the personalization, the better the chances of conversion. Moreover, AI also helps build better relationships with customers.

#3 AI-powered Chatbots

People often get confused between a standard Chatbot and an AI-powered Chatbot. AI-powered Chatbots are an advanced version of standard Chatbots.

Unlike standard Chatbots, AI-powered Chatbots converse with users in a natural, human-like way. Another advantage of these Chatbots is that they never get short-tempered, no matter what a user asks.

AI-powered Chatbots can respond to multiple customer queries at the same time, and the automated responses are personalized enough to persuade a user to buy your products or services.

#4 Predictive Analysis

We know that AI is good at crunching numbers and analyzing data. AI uses statistical models and software to predict a customer’s future actions by studying their past behavior and characteristics.

This way, AI helps marketers learn more about their customers, such as what price they expect for a particular product. Based on the data, AI can also predict what features customers expect in a product upgrade.

Marketers can leverage this data to create taglines and run campaigns that attract more customers and increase the chances of conversion.
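As a toy illustration of this kind of prediction, a least-squares trend line fitted to a customer's past spending can project the next period. The data and the simple linear model here are purely illustrative; real predictive-analytics systems use far richer models and features.

```python
# Illustrative sketch: projecting a customer's future spend from past
# behavior with an ordinary least-squares trend line (y = a*x + b).

def fit_line(xs, ys):
    """Ordinary least squares fit for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Hypothetical data: amount spent (USD) in each of the last 5 quarters.
quarters = [1, 2, 3, 4, 5]
spend = [40.0, 45.0, 52.0, 55.0, 63.0]

a, b = fit_line(quarters, spend)
forecast = a * 6 + b  # projected spend in quarter 6
print(round(forecast, 2))  # 67.8
```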

#5 Web Designing

Developing a website without knowledge of HTML, CSS, and JavaScript once seemed impossible, but AI has made it possible. Popular website builders like Wix use AI to build websites.

All we need to feed in is the content, call-to-action, images, and page layout, and a professional website is ready to roll. Wit.ai and Dialogflow are free AI services offered by Facebook and Google, respectively, which developers can use to add conversational interfaces to their websites.

#6 Content Generation

You might be wondering how this is possible, but it’s true: AI can also generate content for your website, product, service, and more. It can even write a movie review for a news website.

By processing terabytes of data and analyzing thousands of pieces of content, it can generate human-like content that helps you engage users. Popular publications like the Associated Press and Forbes already use content-generation tools such as Wordsmith, Quill, and Articoolo.

#7 Content Curation

It is well established that, out of all marketing strategies, content marketing offers the highest return on investment. Two major aspects of content marketing are content generation and content curation.

Content is usually generated by drawing inspiration from other, similar pieces of content, and AI can be used to find content relevant to our topic of interest.

Tools like Concured and BuzzSumo are AI-enabled. They help you find content that is currently trending so you can plan future content, rework existing content, schedule it, and distribute it.

Netflix’s movie/TV show recommendations and Amazon’s product recommendations are great examples of AI-based content curation. These services provide a personalized user experience by showing customers relevant content based on their interests.

According to Adobe, 47% of marketers believe that generating content at a large scale is tough. AI-enabled content curation can help them produce content at scale, faster.

#8 Email Marketing Campaigns

In this day and age of auto-generated emails, people expect personalized, tailor-made emails that are relevant to them. AI can help you send customized emails in your email marketing campaigns by analyzing user behavior and preferences.

AI analyzes thousands of gigabytes of data to find the title and subject line most likely to grab customers’ attention. It can also find the right time, day, and frequency at which to send the email, which further increases the chances of conversion.

#9 Voice Search Optimization

According to Gary Vaynerchuk, 1 in every 4 Google searches done on mobile is a voice search, which underlines the importance and necessity of voice search optimization.

A marketer must stay aware of these revolutionary changes. Understanding systems such as Google’s RankBrain helps you optimize your website for voice search, and it will also help you increase the organic traffic coming from regular search.

#10 E-commerce

Artificial Intelligence, if used appropriately, can have a massive impact on e-commerce businesses. From building websites and website content to providing product recommendations, managing inventory, and handling customer support, AI can do it all.

AI also plays a crucial role in e-commerce sales forecasting, doing competitor market research, looking for customer search trends, and more.

Final Thoughts

Marketers need to understand the importance of AI in Marketing since it is going to be the next big thing in Digital Marketing and even sales.

The quick rise of AI development companies is evidence that they are going to play a huge role in the software industry, similar to the role mobile application development services played a decade ago.

Artificial Intelligence in Digital Marketing is still in a growth phase, which means its effectiveness and efficiency are only going to increase over time. If there is a right time to adopt AI for creating effective marketing strategies, it is NOW.

History of Artificial Intelligence – AI of the past, present and the future!

History of AI

It was the 1880s when a great scientist came up with this term, and since then a lot of revolutions have come in the field, helping business and the economy to boom. Wait! If you believed the lines above, you are lost. Rome wasn’t built in a day, and neither was AI.

Let’s start by discussing the history of Artificial Intelligence.


The first patent for the invention of the telephone was filed in 1876; AI was introduced at a much later stage.

In truth, the field of AI research was founded at a workshop held on the campus of Dartmouth College during the summer of 1956. At the time, it was predicted that a machine as intelligent as a human being would exist within a generation, and researchers were given millions of dollars to make this vision come true.


Investment and interest in AI rose in the first decades of the 21st century. Since then, Machine Learning has been successfully applied to many problems in academia and industry, thanks to the availability of powerful computers.

1950 – The time when it all started

So this concept has been around for decades, but until the 1950s people were unaware of the term. John McCarthy, widely known as the father of Artificial Intelligence, introduced the term ‘Artificial Intelligence’ in 1955. McCarthy, along with Alan Turing, Allen Newell, Herbert A. Simon, and Marvin Minsky, is counted among the founding fathers of AI. Turing suggested that if humans use available information, as well as reason, to solve problems and make decisions, then why can’t machines do the same?

1974 – Computers flourished!

Gradually, the wave of computers started. With time, they became faster, more affordable, and able to store more information. The best part was that early programs could reason abstractly, showed the beginnings of self-recognition, and achieved early Natural Language Processing.

1980 – The year of AI

In 1980, AI research fired back up with an expansion of funds and algorithmic tools. With deep learning techniques, computers could learn from experience.

2000s – Landing at the Landmark

After all the failed attempts, the technology was finally established, but it was not until the 2000s that the landmark goals were achieved. At that time, AI thrived despite a lack of government funding and public attention.

Any queries about the History of AI so far? Enter your doubts in the comment section below.


Facts aren’t Artificial

We know that technology is evolving day by day and AI is reaching new heights, with research growing alongside it. In the last five years, AI research has grown by 12.9% annually worldwide; the rate at which AI is growing is truly commendable.

It is predicted that in the coming four to five years, China will overtake the United States as the biggest single national source of Artificial Intelligence research. Not so shockingly, Europe is expected to hold the number one spot overall: it is the most diverse region, with high levels of international collaboration. The third position will be held by India, which is humongous in terms of AI research output.

Present Condition of AI

We are all aware of the present situation and of the value AI holds in our lives. AI collects and organizes large amounts of information to produce insights and predictions that are beyond the human capability for manual processing. Amazing, isn’t it?

By increasing organizational efficiency, AI reduces the likelihood of mistakes and detects irregular patterns. So whether it is spam, fraud, or the real-time warnings it gives businesses about suspicious activity, a lot has been safeguarded already.

Cost reduction has helped businesses increase their profits, for example by “training” machines to handle customer support calls and replacing many jobs in that way.


Is the future in safe hands?

Next in this AI history article, we will see whether the future is in safe hands or not. That AI is the future, there is absolutely no doubt. From healthcare, education, and e-commerce to water, power, electricity, and assembly lines, everything is being automated. Such machines and technologies have helped humans achieve efficiency and effectiveness.

There is a big shift in the way we live, work, and relate to one another due to the adoption of cyber-physical systems, the Internet of Things, and the Internet of Systems.

With self-aware systems, the goal is human-like intelligence; with Artificial Superintelligence, machines would be capable of outperforming the smartest humans in every single domain.

Future of Artificial Intelligence

We all know the future is coming, but there is some uncertainty about how it will present itself and what changes it will bring for coming generations. Will it have a positive impact, or will there be negative repercussions?

Until now, there have been Artificial Intelligence systems that were specialists, e.g. AlphaGo (Zero), which mastered Go and was able to outperform human Grandmaster players. Superintelligence takes Artificial Intelligence to the next stage: it refers to a platform smarter than a human, one that can outperform a human in any intellectual task.

The question of when there will be AGI and when there will be superintelligence is, of course, difficult to answer, as nearly all answers amount to predicting the future. Nick Bostrom, a philosopher at Oxford University who studies AI, evaluates this question in a paper.

You must have heard the warnings given by Stephen Hawking, Elon Musk, and Bill Gates. The predictions are negative, but I will eagerly be watching what researchers produce in this field in the future, be it Artificial Intelligence, superintelligence, or even an AI takeover.

Summary

Over the years, a significant resurgence has been seen in AI technologies. AI has become commonplace in every aspect of life: self-driving cars, more accurate weather predictions, and early-stage health diagnoses, just to name a few, and all we can do is welcome the future with open arms. AI is being taught to perform tasks that require human thinking and reasoning.

Workplaces and organizations are becoming “smarter” and more efficient as machines and humans start to work together, and we increasingly use connected devices to enhance our supply chains and warehouses.

With smarter technologies in the workplace, machines are interacting with each other, visualizing the entire production chain, and making decisions autonomously. The industrial revolution has made a lot of advancements in business possible, and clearly there is a long way to go.

Any other points you would like to add to this History of AI article by DataFlair? Share your thoughts in the comment section.



Artificial intelligence

In computer science, artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and animals. Leading AI textbooks define the field as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1] Colloquially, the term “artificial intelligence” is often used to describe machines (or computers) that mimic “cognitive” functions that humans associate with the human mind, such as “learning” and “problem solving”.[2]

As machines become increasingly capable, tasks considered to require “intelligence” are often removed from the definition of AI, a phenomenon known as the AI effect.[3] A quip in Tesler’s Theorem says “AI is whatever hasn’t been done yet.”[4] For instance, optical character recognition is frequently excluded from things considered to be AI,[5] having become a routine technology.[6] Modern machine capabilities generally classified as AI include successfully understanding human speech,[7] competing at the highest level in strategic game systems (such as chess and Go),[8] autonomously operating cars, intelligent routing in content delivery networks, and military simulations.

Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism,[9][10] followed by disappointment and the loss of funding (known as an “AI winter”),[11][12] followed by new approaches, success and renewed funding.[10][13] For most of its history, AI research has been divided into subfields that often fail to communicate with each other.[14] These sub-fields are based on technical considerations, such as particular goals (e.g. “robotics” or “machine learning”),[15] the use of particular tools (“logic” or artificial neural networks), or deep philosophical differences.[16][17][18] Subfields have also been based on social factors (particular institutions or the work of particular researchers).[14]

The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and the ability to move and manipulate objects.[15] General intelligence is among the field’s long-term goals.[19] Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, artificial neural networks, and methods based on statistics, probability and economics. The AI field draws upon computer science, information engineering, mathematics, psychology, linguistics, philosophy, and many other fields.

The field was founded on the assumption that human intelligence “can be so precisely described that a machine can be made to simulate it”.[20] This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence. These issues have been explored by myth, fiction and philosophy since antiquity.[21] Some people also consider AI to be a danger to humanity if it progresses unabated.[22][23] Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment.[24]

In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding; and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science, software engineering and operations research.[25][13]


History

Main articles: History of artificial intelligence and Timeline of artificial intelligence

Silver didrachma from Crete depicting Talos, an ancient mythical automaton with artificial intelligence

Thought-capable artificial beings appeared as storytelling devices in antiquity,[26] and have been common in fiction, as in Mary Shelley‘s Frankenstein or Karel Čapek‘s R.U.R. (Rossum’s Universal Robots).[27] These characters and their fates raised many of the same issues now discussed in the ethics of artificial intelligence.[21]

The study of mechanical or “formal” reasoning began with philosophers and mathematicians in antiquity. The study of mathematical logic led directly to Alan Turing‘s theory of computation, which suggested that a machine, by shuffling symbols as simple as “0” and “1”, could simulate any conceivable act of mathematical deduction. This insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis.[28] Along with concurrent discoveries in neurobiology, information theory and cybernetics, this led researchers to consider the possibility of building an electronic brain. Turing proposed changing the question from whether a machine was intelligent, to “whether or not it is possible for machinery to show intelligent behaviour”.[29] The first work that is now generally recognized as AI was McCulloch and Pitts‘ 1943 formal design for Turing-complete “artificial neurons”.[30]

Challenges

The cognitive capabilities of current architectures are very limited, using only a simplified version of what intelligence is really capable of. For instance, the human mind has come up with ways to reason beyond measure and to give logical explanations for different occurrences in life. A problem that is straightforward for the human mind may be challenging to solve computationally, and vice versa. This gives rise to two classes of models: structuralist and functionalist. Structural models aim to loosely mimic the basic intelligence operations of the mind, such as reasoning and logic. Functional models refer to correlating data to its computed counterpart.[84]


The overall research goal of artificial intelligence is to create technology that allows computers and machines to function in an intelligent manner. The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention.

Reasoning, problem solving

Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions.[85] By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[86]

These algorithms proved to be insufficient for solving large reasoning problems because they experienced a “combinatorial explosion”: they became exponentially slower as the problems grew larger.[66] In fact, even humans rarely use the step-by-step deduction that early AI research was able to model. They solve most of their problems using fast, intuitive judgments.[87]
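The scale of the problem is easy to see: even the number of possible orderings of n items grows factorially, so exhaustive step-by-step search over plans or proofs becomes infeasible beyond toy sizes.

```python
# Why "combinatorial explosion" defeats exhaustive reasoning: the
# number of orderings of n items grows factorially with n, so no
# brute-force enumeration of plans or proofs can keep up.
import math

for n in (5, 10, 15, 20):
    print(n, math.factorial(n))
# At n = 20 there are already ~2.4 * 10^18 orderings; enumerating them
# is hopeless, which is why heuristics and intuition-like shortcuts
# matter so much in practice.
```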

Knowledge representation

An ontology represents knowledge as a set of concepts within a domain and the relationships between those concepts.

Main articles: Knowledge representation and Commonsense knowledge

Knowledge representation[88] and knowledge engineering[89] are central to classical AI research. Some “expert systems” attempt to gather together explicit knowledge possessed by experts in some narrow domain. In addition, some projects attempt to gather the “commonsense knowledge” known to the average person into a database containing extensive knowledge about the world. Among the things a comprehensive commonsense knowledge base would contain are: objects, properties, categories and relations between objects;[90] situations, events, states and time;[91] causes and effects;[92] knowledge about knowledge (what we know about what other people know);[93] and many other, less well researched domains. A representation of “what exists” is an ontology: the set of objects, relations, concepts, and properties formally described so that software agents can interpret them. The semantics of these are captured as description logic concepts, roles, and individuals, and typically implemented as classes, properties, and individuals in the Web Ontology Language.[94] The most general ontologies are called upper ontologies, which attempt to provide a foundation for all other knowledge[95] by acting as mediators between domain ontologies that cover specific knowledge about a particular knowledge domain (field of interest or area of concern). Such formal knowledge representations can be used in content-based indexing and retrieval,[96] scene interpretation,[97] clinical decision support,[98] knowledge discovery (mining “interesting” and actionable inferences from large databases),[99] and other areas.[100]
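A minimal sketch of the ontology idea, assuming a toy is-a hierarchy with inherited properties (production systems would use description logics and the Web Ontology Language rather than a Python dict):

```python
# Toy ontology: concepts, an is-a hierarchy, and properties that
# software can interpret. Concept names here are illustrative only.

ONTOLOGY = {
    "vehicle":      {"is_a": None,      "properties": {"movable": True}},
    "car":          {"is_a": "vehicle", "properties": {"wheels": 4}},
    "electric_car": {"is_a": "car",     "properties": {"emissions": 0}},
}

def lookup(concept, prop):
    """Walk up the is-a hierarchy until the property is found."""
    while concept is not None:
        node = ONTOLOGY[concept]
        if prop in node["properties"]:
            return node["properties"][prop]
        concept = node["is_a"]
    return None

print(lookup("electric_car", "wheels"))   # 4, inherited from "car"
print(lookup("electric_car", "movable"))  # True, inherited from "vehicle"
```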

Among the most difficult problems in knowledge representation are:

Default reasoning and the qualification problem

Many of the things people know take the form of “working assumptions”. For example, if a bird comes up in conversation, people typically picture an animal that is fist-sized, sings, and flies. None of these things are true about all birds. John McCarthy identified this problem in 1969[101] as the qualification problem: for any commonsense rule that AI researchers care to represent, there tend to be a huge number of exceptions. Almost nothing is simply true or false in the way that abstract logic requires. AI research has explored a number of solutions to this problem.[102]
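The bird example can be sketched as defaults plus exceptions; the tables below are illustrative toys, not a real knowledge base:

```python
# Toy sketch of default reasoning: "birds sing and fly" holds by
# default, but the qualification problem means every commonsense rule
# carries exceptions that must override the default.

DEFAULTS = {"bird": {"flies": True, "sings": True}}
EXCEPTIONS = {"penguin": {"flies": False}, "ostrich": {"flies": False}}

def infer(kind, category="bird"):
    """Start from the category's default assumptions, then override
    them with any known exceptions for the specific kind."""
    facts = dict(DEFAULTS[category])
    facts.update(EXCEPTIONS.get(kind, {}))
    return facts

print(infer("sparrow"))  # {'flies': True, 'sings': True}
print(infer("penguin"))  # {'flies': False, 'sings': True}
```

The hard part McCarthy identified is not this mechanism but its open-endedness: the exception table never stops growing.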

Planning

A hierarchical control system is a form of control system in which a set of devices and governing software is arranged in a hierarchy.

Main article: Automated planning and scheduling

Intelligent agents must be able to set goals and achieve them.[107] They need a way to visualize the future—a representation of the state of the world and be able to make predictions about how their actions will change it—and be able to make choices that maximize the utility (or “value”) of available choices.[108]

In classical planning problems, the agent can assume that it is the only system acting in the world, allowing the agent to be certain of the consequences of its actions.[109] However, if the agent is not the only actor, then it requires that the agent can reason under uncertainty. This calls for an agent that can not only assess its environment and make predictions but also evaluate its predictions and adapt based on its assessment.[110]

Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.[111]
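The utility-maximizing choice described above can be sketched as follows; the actions, probabilities, and utilities are invented for illustration:

```python
# Sketch of utility-maximizing action choice under uncertainty:
# pick the action whose possible outcomes have the highest
# expected utility (probability-weighted value).

ACTIONS = {
    # action: list of (probability, utility-of-outcome) pairs
    "move_left":  [(0.8, 10), (0.2, -5)],
    "move_right": [(0.5, 20), (0.5, -15)],
    "wait":       [(1.0, 1)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(ACTIONS, key=lambda a: expected_utility(ACTIONS[a]))
print(best, expected_utility(ACTIONS[best]))  # move_left 7.0
```

Note that the agent prefers the safer action here even though move_right has the single best outcome: expectation, not best case, drives the choice.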

Learning

Main article: Machine learning

Machine learning (ML), a fundamental concept of AI research since the field’s inception,[112] is the study of computer algorithms that improve automatically through experience.[113][114]

Unsupervised learning is the ability to find patterns in a stream of input, without requiring a human to label the inputs first. Supervised learning includes both classification and numerical regression, which requires a human to label the input data first. Classification is used to determine what category something belongs in, and occurs after a program sees a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change.[114] Both classifiers and regression learners can be viewed as “function approximators” trying to learn an unknown (possibly implicit) function; for example, a spam classifier can be viewed as learning a function that maps from the text of an email to one of two categories, “spam” or “not spam”. Computational learning theory can assess learners by computational complexity, by sample complexity (how much data is required), or by other notions of optimization.[115] In reinforcement learning[116] the agent is rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and punishments to form a strategy for operating in its problem space.
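As a toy example of supervised classification viewed as function approximation, the following learns a crude spam/not-spam mapping from a handful of labeled emails. Real spam filters use probabilistic or neural models over far larger corpora; every example email here is invented.

```python
# Minimal sketch of supervised classification: learn, from labeled
# examples, a function mapping email text to "spam" / "not spam".

def train(examples):
    """Collect the words that appear only in spam examples."""
    spam_words, ham_words = set(), set()
    for text, label in examples:
        target = spam_words if label == "spam" else ham_words
        target.update(text.lower().split())
    return spam_words - ham_words

def classify(text, spam_only_words):
    """Flag an email as spam if it shares any spam-only word."""
    return "spam" if set(text.lower().split()) & spam_only_words else "not spam"

examples = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting moved to noon", "not spam"),
    ("lunch at noon tomorrow", "not spam"),
]
model = train(examples)
print(classify("free prize inside", model))  # spam
print(classify("see you at noon", model))    # not spam
```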

Natural language processing

A parse tree represents the syntactic structure of a sentence according to some formal grammar.

Main article: Natural language processing

Natural language processing[117] (NLP) gives machines the ability to read and understand human language. A sufficiently powerful natural language processing system would enable natural-language user interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts. Some straightforward applications of natural language processing include information retrieval, text mining, question answering[118] and machine translation.[119] Many current approaches use word co-occurrence frequencies to construct syntactic representations of text. “Keyword spotting” strategies for search are popular and scalable but dumb; a search query for “dog” might only match documents with the literal word “dog” and miss a document with the word “poodle”. “Lexical affinity” strategies use the occurrence of words such as “accident” to assess the sentiment of a document. Modern statistical NLP approaches can combine all these strategies as well as others, and often achieve acceptable accuracy at the page or paragraph level, but continue to lack the semantic understanding required to classify isolated sentences well. Besides the usual difficulties with encoding semantic commonsense knowledge, existing semantic NLP sometimes scales too poorly to be viable in business applications. Beyond semantic NLP, the ultimate goal of “narrative” NLP is to embody a full understanding of commonsense reasoning.[120]
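The “keyword spotting” limitation is easy to demonstrate; the related-terms lexicon used in the second search is hypothetical:

```python
# Literal keyword search matches documents containing the query word
# but misses a document about a poodle that never says "dog".

docs = [
    "my dog chased the ball",
    "the poodle won best in show",  # about a dog, but never says "dog"
]

def keyword_search(query, documents):
    return [d for d in documents if query in d.split()]

print(keyword_search("dog", docs))  # only the first document matches

# One crude patch: expand the query with related terms drawn from a
# (hypothetical) lexicon before matching.
RELATED = {"dog": ["poodle", "puppy", "hound"]}

def expanded_search(query, documents):
    terms = [query] + RELATED.get(query, [])
    return [d for d in documents if any(t in d.split() for t in terms)]

print(len(expanded_search("dog", docs)))  # 2 -- now both documents match
```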

Applications

An automated online assistant providing customer service on a web page – one of many very primitive applications of artificial intelligence

Main article: Applications of artificial intelligence

AI is relevant to any intellectual task.[278] Modern artificial intelligence techniques are pervasive[279] and are too numerous to list here. Frequently, when a technique reaches mainstream use, it is no longer considered artificial intelligence; this phenomenon is described as the AI effect.[280]

High-profile examples of AI include autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art (such as poetry), proving mathematical theorems, playing games (such as Chess or Go), search engines (such as Google search), online assistants (such as Siri), image recognition in photographs, spam filtering, predicting flight delays,[281] prediction of judicial decisions,[282] targeting online advertisements,[278][283][284] and energy storage.[285]

With social media sites overtaking TV as a source for news for young people and news organizations increasingly reliant on social media platforms for generating distribution,[286] major publishers now use artificial intelligence (AI) technology to post stories more effectively and generate higher volumes of traffic.[287]

AI can also produce deepfakes, a content-altering technology. ZDNet reports, “It presents something that did not actually occur.” Though 88% of Americans believe deepfakes can cause more harm than good, only 47% of them believe they could be targeted. An election year also opens public discourse to the threat of falsified videos of politicians.[288]

Healthcare

Main article: Artificial intelligence in healthcare

A patient-side surgical arm of Da Vinci Surgical System

AI in healthcare is often used for classification, whether to automate initial evaluation of a CT scan or EKG or to identify high-risk patients for population health. The breadth of applications is rapidly increasing. As an example, AI is being applied to the high-cost problem of dosage issues—where findings suggested that AI could save $16 billion. In 2016, a groundbreaking study in California found that a mathematical formula developed with the help of AI correctly determined the accurate dose of immunosuppressant drugs to give to organ transplant patients.[289]

X-ray of a hand, with automatic calculation of bone age by computer software


Artificial intelligence is also assisting doctors. According to Bloomberg Technology, Microsoft has developed AI to help doctors find the right treatments for cancer.[290] There is a great amount of research and drug development relating to cancer; in detail, there are more than 800 medicines and vaccines to treat cancer. This burdens doctors, because there are too many options to choose from, making it more difficult to choose the right drugs for a patient. Microsoft is working on a project to develop a machine called “Hanover”[citation needed]. Its goal is to memorize all the papers relevant to cancer and help predict which combinations of drugs will be most effective for each patient. One project currently underway targets myeloid leukemia, a fatal cancer whose treatment has not improved in decades. Another study reported that artificial intelligence was as good as trained doctors in identifying skin cancers.[291] Yet another study uses artificial intelligence to monitor multiple high-risk patients by asking each patient numerous questions based on data acquired from live doctor-patient interactions.[292] One study using transfer learning showed a machine performing diagnoses similarly to a well-trained ophthalmologist, generating a decision within 30 seconds on whether or not the patient should be referred for treatment, with more than 95% accuracy.[293]

Economics

The long-term economic effects of AI are uncertain. A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in long-term unemployment, but they generally agree that it could be a net benefit, if productivity gains are redistributed.[400] A February 2020 European Union white paper on artificial intelligence advocated for artificial intelligence for economic benefits, including “improving healthcare (e.g. making diagnosis more precise, enabling better prevention of diseases), increasing the efficiency of farming, contributing to climate change mitigation and adaptation, [and] improving the efficiency of production systems through predictive maintenance”, while acknowledging potential risks.[279]