ccclyu awesome-deeplogic: A collection of papers of neural-symbolic AI mainly focus on NLP applications


For visual processing, each “object/symbol” can explicitly package common properties of visual objects like its position, pose, scale, probability of being an object, pointers to parts, etc., providing a full spectrum of interpretable visual knowledge throughout all layers. It achieves a form of “symbolic disentanglement”, offering one solution to the important problem of disentangled representations and invariance. Basic computations of the network include predicting high-level objects and their properties from low-level objects and binding/aggregating relevant objects together.
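As a rough sketch (the class and field names here are illustrative, not taken from any specific system), such an "object/symbol" could be modeled as a small record of interpretable properties with pointers to its parts:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class VisualSymbol:
    """One interpretable object/symbol in a layer of the network."""
    position: Tuple[float, float]   # (x, y) location in the image
    pose: float                     # orientation in radians
    scale: float                    # relative size
    objectness: float               # probability of being a real object
    parts: List["VisualSymbol"] = field(default_factory=list)  # pointers to lower-level symbols

# A high-level "face" symbol binding/aggregating two lower-level part symbols
eye = VisualSymbol(position=(10, 12), pose=0.0, scale=0.2, objectness=0.95)
nose = VisualSymbol(position=(12, 15), pose=0.0, scale=0.3, objectness=0.90)
face = VisualSymbol(position=(11, 14), pose=0.1, scale=1.0, objectness=0.92, parts=[eye, nose])
```

Because every property is an explicit named field rather than an opaque activation, each layer's state stays inspectable, which is the "full spectrum of interpretable visual knowledge" the passage describes.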


For example, consider the scenario of an autonomous vehicle driving through a residential neighborhood on a Saturday afternoon. What is the probability that a child is nearby, perhaps chasing after a ball? This prediction task requires knowledge of the scene that is out of scope for traditional computer vision techniques. More specifically, it requires an understanding of the semantic relations between the various aspects of a scene – e.g., that a ball is a preferred toy of children, and that children often live and play in residential neighborhoods. Knowledge completion enables this type of prediction with high confidence, given that such relational knowledge is often encoded in knowledge graphs (KGs) and may subsequently be translated into embeddings. The topic of neuro-symbolic AI has garnered much interest over the last several years, including at Bosch, where researchers across the globe are focusing on these methods.
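A minimal sketch of how embedded relational knowledge can support this kind of completion, using a TransE-style scoring function. The entities, the relation name, and the 3-dimensional vectors below are toy values picked by hand for illustration, not learned embeddings:

```python
import numpy as np

def transe_score(head, relation, tail):
    """TransE plausibility: smaller ||h + r - t|| means a more plausible triple,
    so we negate the norm to get a score where higher is better."""
    return -np.linalg.norm(head + relation - tail)

# Toy 3-d entity embeddings (hand-picked for illustration)
emb = {
    "ball":  np.array([1.0, 0.0, 0.0]),
    "child": np.array([1.0, 1.0, 0.0]),
    "car":   np.array([0.0, 0.0, 1.0]),
}
preferred_toy_of = np.array([0.0, 1.0, 0.0])  # toy relation embedding

# Knowledge completion: which entity best completes (ball, preferred_toy_of, ?) ?
scores = {e: transe_score(emb["ball"], preferred_toy_of, v)
          for e, v in emb.items() if e != "ball"}
best = max(scores, key=scores.get)
```

Here `best` comes out as `"child"`, mirroring the passage's point: the relational fact "a ball is a preferred toy of children" is recoverable from the embedding space even if it was never observed directly in the current scene.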

The current state of symbolic AI

These computations operate at a more fundamental level than convolutions, capturing convolution as a special case while being significantly more general. All operations are executed in an input-driven fashion, so sparsity and dynamic computation per sample are naturally supported, complementing recent popular ideas of dynamic networks and potentially enabling new types of hardware acceleration. We experimentally show on CIFAR-10 that it can perform flexible visual processing, rivaling the performance of ConvNets, but without using any convolution. Furthermore, it can generalize to novel rotations of images that it was not trained on. In artificial intelligence, symbolic artificial intelligence is the term for the collection of all methods in artificial intelligence research that are based on high-level symbolic (human-readable) representations of problems, logic, and search. The symbolic AI paradigm led to seminal ideas in search, symbolic programming languages, agents, multi-agent systems, the Semantic Web, and the strengths and limitations of formal knowledge and reasoning systems.


In a certain sense, every abstract category, like chair, asserts an analogy between all the disparate objects called chairs, and we transfer our knowledge about one chair to another with the help of the symbol. The technology actually dates back to the 1950s, says Luca Scagliarini, but was considered old-fashioned by the 1990s, when demand for procedural knowledge of sensory and motor processes was all the rage. Now that AI is tasked with higher-order systems and data management, the capability to engage in logical thinking and knowledge representation is cool again. Deep neural networks are also very suitable for reinforcement learning, in which AI models develop their behavior through extensive trial and error. This is the kind of AI that masters complicated games such as Go, StarCraft, and Dota. In 2015, Yann LeCun, Bengio, and Hinton wrote a manifesto for deep learning in one of science's most important journals, Nature.


Sepp Hochreiter – co-creator of LSTMs, one of the leading DL architectures for learning sequences – did the same, writing "The most promising approach to a broad AI is a neuro-symbolic AI … a bilateral AI that combines methods from symbolic and sub-symbolic AI" in April. As this was going to press I discovered that Jürgen Schmidhuber's AI company NNAISENSE revolves around a rich mix of symbols and deep learning. Even Bengio has been busy in recent years trying to get deep learning to do "System 2" cognition – a project that looks suspiciously like trying to implement the kinds of reasoning and abstraction that made many of us over the decades desire symbols in the first place. Neuro-symbolic artificial intelligence combines symbolic reasoning with deep neural network architectures, making the resulting system more capable than contemporary AI systems built on either approach alone.

In case of a problem, developers can follow its behavior line by line and investigate errors down to the machine instruction where they occurred. Symbolic AI is an approach that builds artificial intelligence the same way the human brain is thought to work: it learns to understand the world by forming internal symbolic representations of its "world". Current systems, 20 years after "The Algebraic Mind," still fail to reliably extract symbolic operations (e.g., multiplication), even in the face of immense data sets and training. This is an exceptionally bold claim, but now is not the time to ask how true it is.


As limitations with weak, domain-independent methods became more and more apparent, researchers from all three traditions began to build knowledge into AI applications. The knowledge revolution was driven by the realization that knowledge underlies high-performance, domain-specific AI applications. Our strongest difference seems to be in the amount of innate structure that we think will be required and in how much importance we assign to leveraging existing knowledge. I would like to leverage as much existing knowledge as possible, whereas he would prefer that his systems reinvent as much as possible from scratch. But whatever new ideas are added in will, by definition, have to be part of the innate foundation for acquiring symbol manipulation that current systems lack. In the context of autonomous driving, knowledge completion with KGEs (knowledge graph embeddings) can be used to predict entities in driving scenes that may have been missed by purely data-driven techniques.

Why did symbolic AI fail?

Since symbolic AI can't learn by itself, developers had to feed it with data and rules continuously. They also found out that the more they feed the machine, the more inaccurate its results became. As such, they explored AI subsets that focus on teaching machines to learn on their own via deep learning.

Symbolic AI simply means implanting human thoughts, reasoning, and behavior into a computer program. Symbols and rules are the foundation of human intellect and continuously encapsulate knowledge. Symbolic AI copies this methodology to express human knowledge through user-friendly rules and symbols.

Getting AI to reason: using neuro-symbolic AI for knowledge-based question answering

The report stated that all of the problems being worked on in AI would be better handled by researchers from other disciplines—such as applied mathematics. The report also claimed that AI successes on toy problems could never scale to real-world applications due to combinatorial explosion. Because it is a rule-based reasoning system, Symbolic AI also enables its developers to easily visualize the logic behind its decisions.
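A minimal illustration of that traceability (the facts and rules below are invented for this example): a forward-chaining rule engine makes every inference step explicit, so each derived conclusion can be traced back to the rule and premises that produced it.

```python
# Each rule: (set of premise facts, conclusion fact). Toy rules, invented for illustration.
rules = [
    ({"residential_neighborhood", "saturday_afternoon"}, "children_nearby"),
    ({"children_nearby", "ball_on_road"}, "child_may_chase_ball"),
]

def forward_chain(facts, rules):
    """Apply rules until no new facts appear, logging each derivation step."""
    facts = set(facts)
    trace = []  # (premises, conclusion) pairs, in the order they fired
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append((sorted(premises), conclusion))
                changed = True
    return facts, trace

facts, trace = forward_chain(
    {"residential_neighborhood", "saturday_afternoon", "ball_on_road"}, rules)
# `trace` now records exactly which rule produced each derived fact.
```

This is the sense in which a rule-based system lets developers "visualize the logic behind its decisions": the derivation log is the explanation, with no post-hoc interpretation needed.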

  • The early pioneers of AI believed that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” Therefore, symbolic AI took center stage and became the focus of research projects.
  • A different way to create AI was to build machines that have minds of their own.
  • For example, it introduced metaclasses and, along with Flavors and CommonLoops, influenced the Common Lisp Object System, or CLOS, which is now part of Common Lisp, the current standard Lisp dialect.
  • In this line of effort, deep learning systems are trained to solve problems such as term rewriting, planning, elementary algebra, logical deduction or abduction or rule learning.
  • The semantic layer is not contained in the data, but in the process of acquiring this data, so the particular learning approach of current deep learning methods, focusing on benchmarks and batch processing, cannot capture this important dimension.
  • They can also be used to describe other symbols (a cat with fluffy ears, a red carpet, etc.).
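One of the symbolic tasks mentioned above, term rewriting, can be made concrete with a tiny rewriter. The rules here are a toy fragment of elementary algebra, chosen only for illustration:

```python
def simplify(expr):
    """Recursively apply the rewrite rules (x + 0) -> x and (x * 1) -> x.
    Expressions are tuples (op, left, right); anything else is an atom."""
    if not isinstance(expr, tuple):
        return expr
    op, a, b = expr
    a, b = simplify(a), simplify(b)   # rewrite subterms first
    if op == "+" and b == 0:
        return a
    if op == "*" and b == 1:
        return a
    return (op, a, b)

# (x * 1) + 0  rewrites to  x
result = simplify(("+", ("*", "x", 1), 0))
```

This is the style of discrete, rule-governed manipulation that the deep learning systems cited above are being trained to approximate.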
