Advantages of Omega Server neural network over classical neural networks
We created the Omega Server platform so that any individual user or organization can experiment with the PANN neural network using their own datasets, verify its unique qualities, and solve their practical problems.

Total superiority over any existing neural network.
1. Faster
Omega Server provides unprecedented neural network computing speed.
- Neural network training runs hundreds to thousands of times faster than today's conventional neural networks.
- Inference (data output) is dramatically faster. Real-time processing of new data is important for autonomous devices and systems, for example autonomous vehicles: cars, trucks, buses, drones, ships, etc.
- The speed advantage over existing neural networks grows as data volume and task complexity increase.
- Dramatic reduction in developer effort and time, resulting in faster time-to-market.
2. Smarter
The Omega Server neural network architecture is closer to natural neural networks and enables smarter, more sophisticated AI performance.
- Dynamic changes, new data, or updates are handled by additional (incremental) training, with little or no re-training required.
- Omega Server allows the neural network to be scaled both during training and in the network configuration.
3. Simpler
Omega Server training algorithms are simple and reliable.
- The algorithms allow AI neural net application developers to use computing resources more effectively.
- The neural network has intrinsic security and stable, reliable operation.
- Close to zero chance of the network “hang-up” or “catastrophic forgetting” effect that otherwise occurs in large or complex neural nets.
- The network is easy to integrate into an existing AI architecture and to use as an addition to existing systems.
4. Smaller
Omega Server’s neural network can operate on “light” equipment and runs on small or even autonomous devices.
- Runs on “personal”-sized platforms rather than industrial-scale platforms, for example: a laptop instead of a desktop, a workstation instead of a server, a server instead of a supercomputer.
- Neural Net development can be run on local infrastructure which ensures IP safety and proprietary data security and confidentiality.
- Requires commensurately low energy consumption, in contrast to the often exorbitant power demands of conventional networks.
- Software using our neural net can be embedded in commonplace devices and gadgets, from cell phones and headsets to Raspberry Pi-class equipment, drones, and beyond.
5. Cheaper
New AI applications become available to all with Omega Server.
- Dramatically reduces capital and operating costs.
- Dramatically increases developers’ productivity through faster trial-and-error cycles enabled by ultra-fast neural net computation times.
- Low integration and support costs due to its simplicity.
- Affordable to individuals, start-ups and small businesses.
Test results
Comparison with classical neural networks
We have conducted extensive testing of PANN™ neural networks in comparison with classical neural networks (e.g. Keras from TensorFlow by Google) on popular datasets (e.g. MNIST, among others), using the same “light” hardware.
Omega Server proved superior in both training and inference speed.
We can run additional tests on any required datasets upon request to demonstrate this performance.
More details and proof are provided below.
Omega Server is faster than Keras on TensorFlow by Google
Dataset | Training speedup | Inference speedup |
---|---|---|
Fashion MNIST | 22 times | 45 times |
Pima Indians diabetes | 46 times | 127 times |
Wine quality | 136 times | 101 times |
Banknote authentication | 296 times | 127 times |
Combined cycle power plant | 1,107 times | 234 times |
Boston house price | 689 times | 183 times |
Abalone | 1,251 times | 239 times |
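For context, the two quantities in the table (training time and inference, or “working,” speed) are ordinary wall-clock measurements. The sketch below shows how the Keras side of such a benchmark might be timed; it is illustrative only, and the model, epoch count, and dataset handling are our assumptions rather than the exact configuration behind the numbers above (the actual comparison code is in the GitHub repositories linked below).

```python
# Illustrative sketch: timing a Keras baseline on Fashion MNIST.
# The model and parameters are assumptions, not the benchmarked setup.
import time
import tensorflow as tf

# Fashion MNIST: 60,000 training / 10,000 test images, 28x28 grayscale, 10 classes.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

start = time.perf_counter()
model.fit(x_train, y_train, epochs=5, verbose=0)
train_time = time.perf_counter() - start           # "training time"

start = time.perf_counter()
model.predict(x_test, verbose=0)
infer_time = time.perf_counter() - start           # "working speed" (inference)

print(f"Training: {train_time:.1f} s; inference on 10,000 images: {infer_time:.2f} s")
```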
Confirmed features and facts
Feature | Competing classical neural networks* (Google, IBM, Microsoft) | PANN™ neural networks |
---|---|---|
Required number of network training epochs | 1,000,000 | 10 |
Training speed (7,000 images) | 12,000 s (3.3 hours) | 4 s |
Full task parallelization during network training | Not possible | Yes |
Online up-training of an already trained network and removal of unnecessary data | Not possible. Requires re-training | Yes |
Network size (amount of inputs and outputs) | Limited. Exponential growth of training time | Unlimited. Processing of images of any size |
Increasing network size during computation | Not possible | Yes. Networks can be expanded to any size |
* IBM SPSS Statistics 22
GitHub
As you know, neural network developers use various databases to test their networks. Instructions and data for testing Omega Server with the IRIS and MNIST databases are available on GitHub.
Geoffrey Hinton has figuratively described MNIST as the ‘Drosophila of machine learning,’ meaning that it allows machine learning researchers to study their algorithms under controlled laboratory conditions, similar to how biologists often study fruit flies.
IRIS: https://github.com/Omega-Server/p-net-comparison-iris
MNIST: https://github.com/Omega-Server/p-net-comparison
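As a starting point for the IRIS comparison, the dataset itself can be loaded in a few lines of Python. This is an illustrative sketch assuming scikit-learn; the repositories document their own data-loading steps, which may differ.

```python
# Illustrative sketch: loading and splitting the IRIS dataset.
# Assumes scikit-learn; the linked repositories may prepare the data differently.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)   # 150 samples, 4 features, 3 classes
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)
print(X_train.shape, X_test.shape)  # (120, 4) (30, 4)
```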