AI Panel Discussion

Join Mike Miller and me as we discuss AI solutions that are commercially ready to solve real-world problems.

Press

Keynote Talks

Healthcare & Generative AI
The Main Scoop, NYC 2023

Scaling ML Products
Product School, NYC 2022


Roche Keynote on AI
Roche Startup Day, Basel 2021

From data to products
Product Faculty, NYC 2020

Product Faculty Talk with Moe Ali

WPI Alumni Panel Discussion
Worcester Polytechnic Institute, MA 2019


Clinical NLP
AIMed, CA 2016


Scaling NLU with AI
Data Council, NYC 2014

Natural language is a pervasive human skill that automated computing systems cannot yet fully replicate. The main challenge is understanding how to computationally model both the depth and the breadth of natural languages. In this talk, Andreea Bodnari (Chief Data Scientist at Movable Ink) presents two probabilistic models that address this challenge for two different linguistic tasks: syntactic parsing, and joint learning of named entity recognition and coreference resolution. The syntactic parsing model outperforms state-of-the-art models by discovering linguistic information shared across languages at the granular level of a sentence. The coreference resolution system is one of the first attempts at joint multilingual modeling of named entity recognition and coreference resolution with limited linguistic resources; it performs second best on three out of four languages when compared to state-of-the-art systems built with rich linguistic resources. Andreea shows that both the depth and the breadth of natural languages can be modeled simultaneously by exploiting the underlying linguistic structure shared across languages.
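For readers unfamiliar with the two tasks named in the abstract, the following toy sketch illustrates what named entity recognition and coreference resolution each produce. It is a minimal string-matching illustration of the task definitions only, not the probabilistic models from the talk; the gazetteer and example sentence are invented for demonstration.

```python
def recognize_entities(tokens, gazetteer):
    """Toy NER: tag token positions that appear in a known-entity list."""
    return [(i, tok) for i, tok in enumerate(tokens) if tok in gazetteer]

def resolve_coreference(mentions):
    """Toy coreference: group mentions with identical surface forms
    into chains mapping entity -> list of token positions."""
    chains = {}
    for i, tok in mentions:
        chains.setdefault(tok, []).append(i)
    return chains

tokens = "Marie visited Paris because Marie loves Paris".split()
gazetteer = {"Marie", "Paris"}
mentions = recognize_entities(tokens, gazetteer)
chains = resolve_coreference(mentions)
print(chains)  # {'Marie': [0, 4], 'Paris': [2, 6]}
```

Real systems, including the ones in the talk, must handle pronouns, nested mentions, and cross-lingual variation, which is why joint probabilistic modeling of the two tasks is nontrivial.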