Imagine if your computer could predict the stock market or tell you which football team will win the game this weekend. What if it could detect cancer in X-rays, predict crime, forecast hurricanes, or anticipate shark attacks? Until now, the world of AI (artificial intelligence) has been limited to mathematicians, computer programmers, and engineers. There has never been an easy way for somebody who is not an AI expert to do any of this.

We spent years working with a programmer creating various neural networks, and every time it took months of trial and error to make something that sort of worked but never seemed ideal. What we did not realize at the time was that there are hundreds of different types of neural networks and algorithms, each with hundreds of possible variations and settings, so even when our programmer finished a project, he had tested only a tiny fraction of the possible solutions. It would have taken him many more months to see if there were better approaches, and there was still no good way to know when to stop trying.

In 2016, we realized there must be a better way to do all of this. Why settle for what might be the 25th best solution, or the 500th best solution, when we could test all the solutions at once? If, for example, you are trying to predict whether Apple stock will go up or down, it is easy to test thousands of different AI strategies on real data to see which one actually makes the most money.
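
To make that idea concrete, here is a minimal sketch in plain Python of what "testing many strategies at once" means: score several toy up/down prediction rules on a made-up price series and keep whichever one is most accurate. The strategies, data, and function names are invented for illustration; this is not our actual system, which does the same thing at a much larger scale with real data.

```python
# Toy illustration: score several "up or down" prediction strategies
# against historical data and keep the one with the best accuracy.

def always_up(history):
    return 1  # always predict the price goes up

def always_down(history):
    return -1  # always predict the price goes down

def momentum(history):
    # predict that yesterday's direction continues
    return 1 if history[-1] >= history[-2] else -1

def accuracy(strategy, prices, moves):
    # walk forward through the series, predicting each next move
    hits = 0
    for t in range(2, len(prices)):
        if strategy(prices[:t]) == moves[t]:
            hits += 1
    return hits / (len(prices) - 2)

# Tiny made-up price series; moves[t] is the actual direction from day t-1 to t.
prices = [10, 11, 12, 11, 12, 13, 14, 13, 14, 15]
moves = [0, 1, 1, -1, 1, 1, 1, -1, 1, 1]

strategies = {"always_up": always_up, "always_down": always_down,
              "momentum": momentum}
scores = {name: accuracy(s, prices, moves) for name, s in strategies.items()}
best = max(scores, key=scores.get)
```

With real money on the line, the candidate pool is thousands of models rather than three, but the principle is the same: let measured performance on real data pick the winner.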

So, we created a simple system where we just upload the data (stock prices, photos, medical records, etc.) and our server automatically does the rest. To make the prediction, it uses neural networks and similar leading-edge technologies. Neural networks are a type of machine learning, where the computer is programmed to "think" like the human brain does. It learns like people do, and because we have access to virtually unlimited processing power via cloud computing, we can train the neural network to be "smarter" than a human. It is like we are able to give it a lifetime's worth of expert experience in a matter of days.
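
As a rough sketch of what "learning" means here, the following plain-Python example trains a single artificial neuron with gradient descent (one of the algorithms we use) until it has learned the logical AND function from examples. The code and numbers are purely illustrative, not our production setup.

```python
import math

def sigmoid(z):
    # squashes any number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

# Training data: the logical AND of two inputs.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # one weight per input
b = 0.0          # bias
lr = 0.5         # learning rate

for epoch in range(5000):
    for (x1, x2), target in data:
        y = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = y - target              # gradient of the cross-entropy loss
        w[0] -= lr * err * x1         # nudge each weight downhill
        w[1] -= lr * err * x2
        b -= lr * err

predictions = [round(sigmoid(w[0] * x1 + w[1] * x2 + b))
               for (x1, x2), _ in data]
```

A real network has thousands or millions of these neurons, but each one adjusts its weights in essentially this way, and cloud computing lets us run those adjustments billions of times.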

**Some of the machine learning algorithms we use include:**

k-Nearest Neighbor

Naive Bayes

Classification and Regression Trees

Decision Trees and Random Forests

Discriminant Analysis

Ensemble Techniques (bagging, boosting, etc.)

Bucket of models (Cross-Validation Selection, Gating)

Stacking

Levenberg-Marquardt back-propagation

Gradient descent

Gradient descent with momentum

Gradient descent with momentum and adaptive learning rate back-propagation

Resilient back-propagation

Scaled conjugate gradient back-propagation

BFGS quasi-Newton back-propagation

One-step secant method

Nonlinear Single Shot Learning Algorithm (NSLA)

Linear Quantum Single Shot Learning Algorithm (LSSA)

Negative Correlation Learning (NCL)

Design of Experiments (DOE)

Taguchi method

Fuzzy membership functions

Q-Learning and Recurrent Reinforcement Learning

Directed Artificial Bee Colony Algorithm

ANFIS networks with Quantum-behaved Particle Swarm Optimization

Hybrid of fuzzy clustering and TSK fuzzy system

TSK fuzzy system tuned by Simulated annealing improved bacterial chemotaxis optimization

Niche Genetic Algorithm (NGA)

United immune programming (IP)

Gene expression programming (GEP)

Bacterial colony radial basis function neural network (RBFNN)

Bacterial foraging optimization (BFO)

Tabu search (TS)

Simulated annealing

Geodesic Flow Kernel (GFK)

Subspace Alignment Domain Adaptation (SADA)

Subspace Interpolation Dictionary Learning (SIDL)
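
To give a flavor of how some of these work, here is a hedged sketch of the first algorithm on the list, k-Nearest Neighbor: a new point is classified by the majority label among its k closest training examples. The fruit data is made up for illustration.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    # train: list of ((feature, feature, ...), label) pairs
    # Sort training points by distance to the query, then take a
    # majority vote among the k nearest labels.
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))
    labels = [label for _, label in nearest[:k]]
    return Counter(labels).most_common(1)[0][0]

# (weight in grams, diameter in cm) -> fruit label
train = [((150, 7.0), "apple"), ((170, 7.5), "apple"), ((140, 6.8), "apple"),
         ((120, 6.0), "orange"), ((110, 5.8), "orange"), ((115, 6.1), "orange")]

label = knn_predict(train, (160, 7.2), k=3)
```

The appeal of testing many algorithms is that nobody can say in advance whether something this simple or one of the exotic methods above will win on a given dataset.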

**We also test different types of neural networks such as:**

Radial basis function (RBF) network

Feedforward neural network

Generalized regression neural network

Genetic Algorithms (such as NEAT, MultiNEAT, HyperNEAT, and Novelty Search)

Restricted Boltzmann machine (RBM) and convolutional RBM (convRBM)

Principal component analysis (PCA)

Self-Organizing Maps

Goal seeking neuron (GSN)

Fuzzy neural network (FNN)

Wavelet Neural Network

Weightless Neural Network

Recursive neural networks

Hopfield network

Bidirectional associative memory (BAM)

Elman SRN

Jordan networks

Echo state network

Liquid state machines

Neural history compressor

Long short-term memory (LSTM) network

Bi-directional RNN

Continuous-time RNN

Recurrent Multi-Layer Perceptron (RMLP)

Second order RNN

Multiple timescales recurrent neural network (MTRNN) model

Pollack's sequential cascaded networks

Neural Turing Machines (NTMs)

Neural network pushdown automata (NNPDAs)

Bidirectional associative memory using Markov stepping

Probabilistic neural network (PNN)

Higher Order Neural Network (HONN)

SETAR

Holographic associative memory

Instantaneously trained networks

Spiking neural networks

Dynamic neural networks

Cascading neural networks

Neuro-fuzzy networks

Compositional pattern-producing networks

One-shot associative memory

Hierarchical temporal memory

Markov chains

Boltzmann machines

Autoencoders

Sparse autoencoders

Variational autoencoders

Denoising autoencoders

Deep belief networks

Deep convolutional neural networks

Deconvolutional networks

Deep convolutional inverse graphics networks

Generative adversarial networks

Gated recurrent units

Bidirectional long short-term memory networks

Bidirectional gated recurrent units

Deep residual networks

Extreme learning machines

Support vector machines (SVM)

Matching Nets (for one-shot learning)

Memory-Augmented Neural Networks (MANN, for one-shot learning)

Siamese Neural Networks (for one-shot learning)
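
As one small, self-contained example of the architectures above, here is a toy Hopfield network in plain Python: it memorizes a pattern in its weight matrix via a Hebbian rule and then recovers that pattern from a corrupted copy. The six-unit pattern is invented for illustration; real associative memories store many patterns over many more units.

```python
def hopfield_weights(pattern):
    # Hebbian storage rule: w_ij = p_i * p_j, with no self-connections.
    n = len(pattern)
    return [[pattern[i] * pattern[j] if i != j else 0 for j in range(n)]
            for i in range(n)]

def recall(weights, state):
    # One synchronous update: each unit takes the sign of its weighted input.
    return [1 if sum(w * s for w, s in zip(row, state)) >= 0 else -1
            for row in weights]

stored = [1, 1, -1, -1, 1, -1]       # the pattern the network memorizes
noisy = [-1, 1, -1, -1, 1, -1]       # same pattern with the first unit flipped
W = hopfield_weights(stored)
recovered = recall(W, noisy)
```

With one stored pattern, a single update step is enough to repair the flipped unit; the network settles into the memory closest to what it was shown.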