Sunday, April 19, 2020

Machine Learning Introduction

What you read on this page:

Introduction to Machine Learning
Machine Learning History
Application of Machine Learning
Machine Learning with Python
Machine Learning with R
Machine Learning Methods

Machine Learning Software

Introduction to Machine Learning

Machine Learning (ML) is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as "training data," in order to make predictions or decisions without being explicitly programmed to perform the task. Machine learning algorithms are used in a wide range of applications, including email filtering and computer vision, where it is difficult or infeasible to develop a conventional algorithm that performs the task effectively.

Machine learning is closely related to computational statistics, which focuses on making predictions using computers. The study of mathematical optimization supplies methods, theory, and application domains to the field of machine learning. Data mining is a related field of study, focusing on exploratory data analysis through unsupervised learning. In its application to business problems, machine learning is also known as predictive analytics.

Machine learning is a type of AI that allows software applications to become more accurate at predicting outcomes without being explicitly programmed to do so. The basic premise of machine learning is to build an algorithm that can receive input data and use statistical analysis to predict an output value within an acceptable range. Machine learning algorithms are often categorized as supervised or unsupervised.

Supervised algorithms require a human to provide both the input and the desired output, in addition to feedback about the accuracy of predictions during training. Once training is complete, the algorithm applies what it has learned to new data. Unsupervised algorithms do not need to be trained with desired output data. Instead, they use an iterative approach called deep learning to review data and arrive at conclusions. Unsupervised learning algorithms are used for more complex processing tasks than supervised learning systems.

The processes involved in machine learning are similar to those of data mining and predictive modelling. Both require searching through data for patterns and adjusting program actions accordingly. Many people are familiar with machine learning from shopping online and seeing personalized recommendations. Beyond personalized marketing, other uses for machine learning include fraud detection, spam filtering, network security threat detection, predictive maintenance, and building news feeds.

Machine Learning History

Arthur Samuel, an American pioneer in computer games and artificial intelligence, coined the term "machine learning" in 1959 while at IBM. A representative book on machine learning research during the 1960s was Nilsson's book on learning machines, which dealt mostly with machine learning for pattern classification. Interest in machine learning related to pattern recognition continued into the 1970s, as described in the 1973 book by Duda and Hart. In 1981, a report was published on the use of teaching strategies so that a neural network could learn to recognize 40 characters (26 letters, 10 digits, and 4 special symbols) from a computer terminal. As a scientific endeavour, machine learning grew out of the quest for artificial intelligence. In the early days of AI as an academic discipline, some researchers were interested in having machines learn from data. They attempted to approach the problem with various symbolic methods, as well as with what were then termed "neural networks." These were mostly perceptrons and other models that were later found to be reinventions of the generalized linear models of statistics.

However, an increasing emphasis on the logical, knowledge-based approach caused a rift between artificial intelligence and machine learning. Probabilistic systems were plagued by theoretical and practical problems of data acquisition and representation until the 1980s. By 1980, expert systems had come to dominate AI, and statistics was out of favour. Work on symbolic, knowledge-based learning did continue within AI, leading to inductive logic programming, but the more statistical line of research now lay outside the field of AI proper, in pattern recognition and information retrieval. Neural network research had been abandoned by AI and computer science around the same time. This line, too, was continued outside the AI/CS field, as "connectionism," by researchers from other disciplines including Hopfield, Rumelhart, and Hinton. Their main success came in the mid-1980s with the reinvention of backpropagation.

Machine learning, reorganized as a separate field, started to flourish in the 1990s. The field changed its goal from achieving artificial intelligence to tackling practical problems. It shifted its focus away from the symbolic approaches inherited from AI and toward methods and models borrowed from statistics and probability theory. It also benefited from the increasing availability of digitized information and the ability to distribute it.


Application of Machine Learning

Machine learning is currently a buzzword in the world of technology, and for good reason: it represents a major step forward in how computers can learn. Basically, a machine learning algorithm is given a "training set" of data and is then asked to use that data to answer a question. For example, you might show a computer a set of photos, some labelled "this is a cat" and some labelled "this is not a cat." You can then show the computer a series of new photos, and it will identify which of those photos are of cats.

Machine learning then keeps adding to its training set. Every photo it identifies - rightly or wrongly - gets added to the training set, and the program effectively becomes "smarter" and better at its task over time. In effect, it is learning.
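
To make the "training set" idea concrete, here is a minimal sketch using scikit-learn. The feature vectors and labels are made-up placeholders (a real system would extract features from actual photos); only the train-then-predict workflow is the point.

# A minimal sketch of training on labelled examples and predicting on new ones.
from sklearn.linear_model import LogisticRegression

# Hypothetical features for each photo, e.g. [ear_pointiness, whisker_count]
X_train = [[0.9, 24], [0.8, 30], [0.1, 0], [0.2, 2]]
y_train = ["cat", "cat", "not cat", "not cat"]

model = LogisticRegression()
model.fit(X_train, y_train)          # learn from the labelled training set

X_new = [[0.85, 28], [0.15, 1]]      # new, unlabelled photos
print(model.predict(X_new))          # e.g. ['cat' 'not cat']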

Data security

Malware is a big and growing problem. In 2014, Kaspersky Lab reported that it was detecting 325,000 new malware files every day. But Deep Instinct notes that each new piece of malware tends to have almost the same code as previous versions; only between 2 and 10 percent of the files change from iteration to iteration. Their learning model has no problem with that 2-10 percent variation and can predict which files are malware with great accuracy. In other cases, machine learning algorithms can look for patterns in how data in the cloud is accessed and report anomalies that could predict security breaches.
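
As a rough sketch of that anomaly-reporting idea, the example below uses scikit-learn's IsolationForest. The "access log" numbers are invented purely for illustration: each row is a hypothetical [requests_per_hour, megabytes_downloaded].

# A minimal anomaly-detection sketch on made-up access-log data.
from sklearn.ensemble import IsolationForest

normal_access = [[12, 5], [15, 6], [10, 4], [14, 7], [11, 5], [13, 6]]
detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(normal_access)            # learn what "normal" access looks like

new_events = [[13, 5], [400, 900]]     # second row is a suspicious spike
print(detector.predict(new_events))    # 1 = normal, -1 = anomaly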

Personal security

If you have recently flown on a plane or attended a major public event, you almost certainly had to wait in line for a security screening. But machine learning is proving that it can help eliminate false alarms and spot things that human screeners might miss at airports, stadiums, concerts and other venues. That can speed up the process considerably and make events safer.

Financial trading
Many people are eager to predict what the stock market will do on any given day, for obvious reasons. But machine learning algorithms are getting closer all the time. Many prestigious trading firms use proprietary systems to predict and execute trades at high speed and high volume. Many of these rely on probabilities, but even a trade with a relatively low probability can, at high enough volume or speed, turn huge profits for the firms. Humans cannot compete with machines when it comes to consuming vast quantities of data or the speed at which trades can be executed.

Health care
Machine learning algorithms can process more information and spot more patterns than their human counterparts. One study used computer-aided diagnosis (CAD) to review early mammography scans of women who later developed breast cancer, and the system correctly identified 52% of the cancers as much as a year before the women were officially diagnosed. In addition, machine learning can be used to understand risk factors for disease in large populations. Medecision developed an algorithm that was able to identify eight variables to predict avoidable hospitalizations in diabetic patients.

Marketing personalization

The better you understand your customers, the better you can serve them and the more you will sell. That is the foundation of marketing personalization. You may have had the experience of visiting an online store, looking at a product but not buying it, and then seeing digital ads for that exact product across the web over the following days. Companies can personalize which emails a customer receives, which direct mailings or coupons they get, which products show up as "recommended" and so on, all designed to lead the customer more reliably toward a sale.

Detection of fraud

Machine learning is getting better and better at detecting potential fraud in many different areas. For example, PayPal uses machine learning to combat money laundering. The company has tools that compare millions of transactions and can accurately distinguish between legitimate and fraudulent transactions between buyers and sellers.

Recommendations

If you use services like Amazon or Netflix, you are probably already familiar with these. Smart machine learning algorithms analyze your activity and compare it to that of millions of other users to determine what you might like to buy or watch next. These recommendations keep getting smarter, for example recognizing that you might buy certain items as a gift (and not want the item for yourself), or that different family members may have different TV preferences.

Search online

Perhaps the most popular use of machine learning: Google and its competitors are constantly improving what their search engines understand. Every time you run a Google search, the program watches how you respond to the results. If you click the top result and stay on that web page, it can assume you got the information you wanted and the search was successful. If, on the other hand, you click through to the second page of results or enter a new search string without clicking anything, it can infer that the first results did not give you what you were looking for.

Natural Language Processing (NLP)

NLP is used in a variety of exciting applications across disciplines. Machine learning algorithms with natural language capabilities can stand in for customer service representatives and route customers to the information they need more quickly. They are also used to translate obscure legal wording in contracts into plain language and to help lawyers sort through large volumes of information to prepare a case.

Smart cars

IBM recently surveyed senior automotive executives, and 74% expected that we would see smart cars on the road by 2025. A smart car would not only integrate into the Internet of Things but also learn about its owner and its environment. It might automatically adjust internal settings such as temperature, audio and seat position based on the driver, report and even fix problems itself, drive itself, and offer real-time advice about traffic and road conditions.

Machine Learning with Python

Machine learning is a type of artificial intelligence (AI) that allows computers to learn without being explicitly programmed. It focuses on developing computer programs that can change when exposed to new data. In this article, we will look at the basics of machine learning and how to implement a simple machine learning algorithm using Python.

Setting the environment

The Python community has created many modules to help programmers implement machine learning. In this article, we will use the numpy, scipy and scikit-learn modules. We can install them from the command line:

pip install numpy scipy scikit-learn
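
With the environment set up, a small example can illustrate the workflow. The sketch below trains a k-nearest neighbours classifier on scikit-learn's built-in Iris dataset; the dataset, model and parameters are just illustrative choices, not the only way to do it.

# A simple supervised-learning example using scikit-learn's Iris dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Load features (flower measurements) and labels (species)
X, y = load_iris(return_X_y=True)

# Hold out part of the data to evaluate the trained model
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Train a k-nearest neighbours classifier and measure its accuracy
model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))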


Machine Learning with R

R is one of the main languages for data science. It provides excellent visualization features, which are essential for exploring data before handing it to any automated learning algorithm, as well as for evaluating the results of learning algorithms. Many R packages for machine learning are available off the shelf, and many modern methods of statistical learning are implemented in R as part of their development.


  • R is used by some of the best data scientists in the world. In surveys on Kaggle (the competitive machine learning platform), R is among the most widely used machine learning tools. When professional machine learning practitioners were surveyed in 2015, R was again the most popular machine learning tool.
  • R is powerful because of the breadth of techniques it offers. Any technique you might need to analyze, visualize, sample, model, and evaluate data is provided in R, and the platform has more techniques than any other you will come across.
  • R is state of the art because it is used by academics. One of the reasons R offers so many techniques is that academics who develop new algorithms often develop them in R and release them as R packages. This means you can access cutting-edge algorithms in R before other platforms, and that some algorithms are available only in R until someone ports them elsewhere.
  • R is free because it is open-source software. You can download it for free right now and it runs on any platform.



Machine Learning Methods


  • Machine learning algorithms are often categorized as supervised or unsupervised.
  • Supervised machine learning algorithms apply what has been learned in the past to new data, using labelled examples to predict future events. Starting from the analysis of a known training dataset, the learning algorithm produces an inferred function to make predictions about output values. After sufficient training, the system is able to provide targets for any new input. The learning algorithm can also compare its output with the correct, intended output and find errors in order to adjust the model accordingly.
  • In contrast, unsupervised machine learning algorithms are used when the information used for training is neither classified nor labelled. Unsupervised learning studies how systems can infer a function to describe a hidden structure from unlabelled data. The system does not figure out the right output, but it explores the data and can draw inferences to describe hidden structures in unlabelled data. (A short Python sketch contrasting supervised and unsupervised learning appears after this list.)
  • Semi-supervised machine learning algorithms fall somewhere between supervised and unsupervised learning, since they use both labelled and unlabelled data for training. Typically a small amount of the data is labelled and a large amount is unlabelled. Systems that use this method can considerably improve learning accuracy. Semi-supervised learning is usually chosen when labelling the data requires skilled and relevant resources, whereas acquiring unlabelled data generally does not require additional resources.
  • Reinforcement machine learning algorithms are a learning method that interacts with its environment by producing actions and discovering errors or rewards. Trial-and-error search and delayed reward are the most important characteristics of reinforcement learning. This method allows machines and software agents to automatically determine the ideal behaviour within a specific context in order to maximize performance. Simple reward feedback is required for the agent to learn which action is best; this is known as the reinforcement signal.
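
As a rough illustration of the first two categories, the sketch below (using scikit-learn on a few made-up points) trains a supervised classifier on labelled data and then runs an unsupervised clustering algorithm on the same points without their labels.

# Supervised learning: labelled inputs and outputs
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

X = [[1, 1], [1, 2], [8, 8], [9, 8]]        # made-up feature vectors
y = ["small", "small", "large", "large"]    # labels, used only by the supervised model

clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(X, y)                               # learns from labelled examples
print(clf.predict([[2, 1]]))                # -> ['small']

# Unsupervised learning: the same points, no labels
clusterer = KMeans(n_clusters=2, n_init=10, random_state=0)
print(clusterer.fit_predict(X))             # groups the points into 2 clusters, e.g. [0 0 1 1]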

Machine learning makes it possible to analyze massive quantities of data. While it generally delivers faster, more accurate results when identifying profitable opportunities or dangerous risks, it may also require additional time and resources to be trained properly. Combining machine learning with AI and cognitive technologies can make it even more effective at processing large volumes of information.


Machine Learning Software

TensorFlow
TensorFlow is an open-source software library for dataflow programming across a range of tasks. It is a math library and is also used for machine learning applications such as neural networks. TensorFlow is used for both research and production at Google. It was released in November 2015 under the Apache 2.0 license. TensorFlow is used for numerical computation with data flow graphs, letting users express the desired computation as a graph in which the nodes represent mathematical operations and the edges represent the data flowing from one node to another.
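
As a small illustration of that graph idea (assuming TensorFlow 2.x is installed, e.g. via pip install tensorflow), the sketch below wraps a matrix computation in tf.function, which traces the Python code into a data flow graph whose nodes are operations and whose edges carry tensors.

import tensorflow as tf

@tf.function                      # traces this Python function into a data flow graph
def affine(x, w, b):
    return tf.matmul(x, w) + b    # matmul and add become graph nodes; tensors flow along the edges

x = tf.constant([[1.0, 2.0]])
w = tf.constant([[3.0], [4.0]])
b = tf.constant([0.5])
print(affine(x, w, b))            # tf.Tensor([[11.5]], shape=(1, 1), dtype=float32)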

Jupyter Notebook
Jupyter Notebook is an open-source web application that allows you to create and share documents containing live code, equations, visualizations, and narrative text. Its uses include data cleaning and transformation, numerical simulation, statistical modelling, data visualization, machine learning and more.
The Jupyter project is a non-profit, open-source project that was born in 2014 out of the IPython project as it evolved to support interactive data science and scientific computing across all programming languages. Jupyter will always be 100% open-source software, free for everyone to use, and released under the terms of the modified BSD license. Jupyter is developed in the open on GitHub through the Jupyter community.

Watson AI
IBM Watson began as a question-answering computer built to compete in a televised quiz contest and has grown into a set of AI-based application programming interfaces (APIs) available in IBM's cloud environment. These Watson APIs can ingest, understand, analyze, interact with, learn from, and respond to all kinds of data in a variety of natural ways and at scale, allowing business processes and applications to be reimagined.

