Back to the talks Previous by time: Emacs was async before async was cool Next by time: The Wheels on D-Bus Track: General

GRAIL---A Generalized Representation and Aggregation of Information Layers

Sameer Pradhan (he/him)

[Schedule image: position of this talk in the schedule for Sun 2022-12-04. Solid lines show talks with Q&A via BigBlueButton; dashed lines show talks with Q&A via IRC or Etherpad.]

Format: 20-min talk followed by live Q&A (https://emacsconf.org/current/grail/room)
Etherpad: https://pad.emacsconf.org/2022-grail
Discuss on IRC: #emacsconf-gen
Status: Waiting for video from speaker

Times in different timezones:
Sunday, Dec 4 2022, ~2:35 PM - 2:55 PM EST (US/Eastern)
which is the same as:
Sunday, Dec 4 2022, ~1:35 PM - 1:55 PM CST (US/Central)
Sunday, Dec 4 2022, ~12:35 PM - 12:55 PM MST (US/Mountain)
Sunday, Dec 4 2022, ~11:35 AM - 11:55 AM PST (US/Pacific)
Sunday, Dec 4 2022, ~7:35 PM - 7:55 PM UTC
Sunday, Dec 4 2022, ~8:35 PM - 8:55 PM CET (Europe/Paris)
Sunday, Dec 4 2022, ~9:35 PM - 9:55 PM EET (Europe/Athens)
Monday, Dec 5 2022, ~1:05 AM - 1:25 AM IST (Asia/Kolkata)
Monday, Dec 5 2022, ~3:35 AM - 3:55 AM +08 (Asia/Singapore)
Monday, Dec 5 2022, ~4:35 AM - 4:55 AM JST (Asia/Tokyo)
Find out how to watch and participate

Description

The human brain receives various signals that it assimilates (filters, splices, corrects, etc.) to build a syntactic structure and its semantic interpretation. This is a complex process that enables human communication. The field of artificial intelligence (AI) is devoted to studying how we generate symbols and derive meaning from such signals and to building predictive models that allow effective human-computer interaction.

For the purposes of this talk, we will limit the scope of signals to the domain of language: text and speech. Computational Linguistics (CL), a.k.a. Natural Language Processing (NLP), is a sub-area of AI that tries to interpret such signals. It involves modeling and predicting complex linguistic structures from them. These models tend to rely heavily on a large amount of "raw" (naturally occurring) data and a varying amount of (manually) enriched data, commonly known as "annotations". The models are only as good as the quality of the annotations. Owing to the complex and numerous nature of linguistic phenomena, a divide-and-conquer approach is common. The upside is that it allows one to focus on one, or a few, related linguistic phenomena. The downside is that the universe of these phenomena keeps expanding, as language is context-sensitive and evolves over time. For example, depending on the context, the word "bank" can refer to a financial institution, the rising ground surrounding a lake, or something else. The verb "google" did not exist before the company came into being.
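The idea of annotations layered over raw text can be made concrete with a small sketch. The following is a minimal, hypothetical example (plain Python; the data and labels are invented for illustration) of standoff annotation: sense labels are attached to character spans of an unmodified raw string, so the same surface word "bank" receives different labels in different contexts.

```python
# Minimal sketch of standoff annotation: the raw text is never modified;
# each annotation records a (start, end) character span plus a label.
raw = "She sat on the bank of the river, then walked to the bank to deposit a check."

# Hypothetical sense labels for the two occurrences of "bank".
annotations = [
    {"start": 15, "end": 19, "layer": "sense", "label": "bank/GEOGRAPHY"},
    {"start": 53, "end": 57, "layer": "sense", "label": "bank/FINANCE"},
]

def spans(text, anns, layer):
    """Return (surface string, label) pairs for one annotation layer."""
    return [(text[a["start"]:a["end"]], a["label"])
            for a in anns if a["layer"] == layer]

for surface, label in spans(raw, annotations, "sense"):
    print(surface, "->", label)
```

Because the annotations live outside the text and refer to it only by offsets, any number of further layers (part of speech, syntax, coreference) can be added later without touching either the raw data or the existing layers.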

Manually annotating data can be a very task-specific, labor-intensive endeavor. Because of this, advances in multiple modalities have happened in silos until recently. Recent advances in computer hardware and machine learning algorithms have opened doors to the interpretation of multimodal data. However, the need to piece together such related but disjoint predictions poses a huge challenge.

This brings us to the two questions that we will try to address in this talk:

  1. How can we come up with a unified representation of data and annotations that encompasses arbitrary levels of linguistic information? and,

  2. What role might Emacs play in this process?

Emacs provides a rich environment for editing and manipulating the recursive, embedded structures found in programming languages. Its view of text, however, is more or less linear: strings broken into words, strings ended by periods, strings identified using delimiters, and so on. It does not assume embedded or recursive structure in text. However, interpreting natural language involves operating on exactly such structures. What if we could adapt Emacs to manipulate rich structures derived from text? Unlike programming languages, which are designed to be parsed and interpreted deterministically, the interpretation of statements in natural languages frequently has to deal with phenomena such as ambiguity, inconsistency, and incompleteness, and can get quite complex.

We present an architecture (GRAIL) that uses the capabilities of Emacs to allow the representation and aggregation of such rich structures in a systematic fashion. Our approach is not tied to Emacs, but it makes use of Emacs's many built-in capabilities for creating and evaluating solution prototypes.
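The aggregation idea can be sketched in a few lines. This is an illustrative sketch only, not GRAIL's actual format or API: several independently produced annotation layers (here invented token, sense, and entity layers) are merged into one unified view by keying every annotation to character offsets in the shared raw text.

```python
# Illustrative sketch (not GRAIL's actual format): independently produced
# annotation layers are aggregated over the same raw text by keying each
# annotation to character offsets in that text.
raw = "Alice googled the nearest bank."

layers = {
    "token": [(0, 5, "NNP"), (6, 13, "VBD"), (14, 17, "DT"),
              (18, 25, "JJS"), (26, 30, "NN"), (30, 31, ".")],
    "sense": [(26, 30, "bank/FINANCE")],
    "entity": [(0, 5, "PERSON")],
}

def aggregate(text, layers):
    """Merge all layers into one offset-indexed view of the text."""
    merged = {}
    for name, anns in layers.items():
        for start, end, label in anns:
            merged.setdefault((start, end), {})[name] = label
    # Attach the surface string so each record is self-describing.
    return [{"span": k, "surface": text[k[0]:k[1]], **v}
            for k, v in sorted(merged.items())]

for record in aggregate(raw, layers):
    print(record)
```

The point of the design is that each layer can be produced by a different tool, or a different annotator, in its own silo; as long as all of them refer to the same raw text by offsets, they can be combined after the fact without any layer needing to know about the others.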

Questions or comments? Please e-mail emacsconf-org-private@gnu.org


CategoryLinguistics