Watson is a question answering computer system
capable of answering questions posed in natural language, developed in IBM's
DeepQA project by a research team led by principal investigator David Ferrucci.
Watson was named after IBM's first CEO, industrialist Thomas J. Watson. The
computer system was specifically developed to answer questions on the quiz show
Jeopardy! In 2011, Watson competed on Jeopardy! against former
winners Brad Rutter and Ken Jennings. Watson received the first place prize of
$1 million.
Watson had access to 200 million pages of structured and unstructured content, consuming four terabytes of disk storage and including the full text of Wikipedia, but was not connected to the Internet during the game. For each clue, Watson's three most probable responses were displayed on the television screen. Watson consistently outperformed its human opponents on the game's signaling device, but had trouble in a few categories, notably those with short clues containing only a few words.
In February 2013, IBM announced that the Watson software system's first commercial application would be for utilization management decisions in lung cancer treatment at Memorial Sloan Kettering Cancer Center, New York City, in conjunction with health insurance company WellPoint. IBM Watson's former business chief, Manoj Saxena, says that 90% of nurses in the field who use Watson now follow its guidance.
Description of Watson
Watson is a question answering (QA) computing system that IBM built to apply advanced natural language processing, information retrieval, knowledge representation, automated reasoning, and machine learning technologies to the field of open domain question answering.
The key difference between QA technology and document search is that document search takes a keyword query and returns a list of documents, ranked in order of relevance to the query (often based on popularity and page ranking), while QA technology takes a question expressed in natural language, seeks to understand it in much greater detail, and returns a precise answer to the question.
According to IBM, "more than 100 different techniques are used to analyze natural language, identify sources, find and generate hypotheses, find and score evidence, and merge and rank hypotheses."
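IBM has not published that stack as code, but the overall shape of the pipeline (generate many candidate answers, score each against evidence, then merge and rank by confidence) can be sketched in a few lines of Python. Everything below is a toy stand-in invented for illustration, not IBM's implementation; note how, unlike document search, it returns ranked answers rather than ranked documents:

```python
# A toy, self-contained sketch of a DeepQA-style flow (emphatically not IBM's
# code): generate candidate answers from a small corpus, score each candidate
# against supporting evidence, then merge and rank by confidence.

def tokenize(text: str) -> set[str]:
    return {w.strip('.,?!"').lower() for w in text.split()}

def answer_question(question: str, corpus: list[str]) -> list[tuple[str, float]]:
    q_terms = tokenize(question)
    candidates: dict[str, list[float]] = {}
    for passage in corpus:
        # Hypothesis generation: treat each passage's first capitalized word
        # as a candidate answer (a crude stand-in for entity detection).
        entity = next((w.strip('.,') for w in passage.split() if w[0].isupper()), None)
        if entity is None:
            continue
        # Evidence scoring: term overlap between the question and the passage.
        overlap = len(q_terms & tokenize(passage)) / max(len(q_terms), 1)
        candidates.setdefault(entity, []).append(overlap)
    # Merge and rank: average the evidence scores per candidate answer.
    ranked = [(answer, sum(s) / len(s)) for answer, s in candidates.items()]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

corpus = [
    "Watson was named after Thomas J. Watson, IBM's first CEO.",
    "Jeopardy! is a quiz show featuring answer-and-question clues.",
]
print(answer_question("Who was IBM's first CEO?", corpus)[:3])
```

The real system replaced each of these stand-ins with dozens of specialized analyzers and trained models, but the generate-score-merge-rank skeleton is the part IBM's description pins down.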
Development
Since Deep Blue's victory over Garry Kasparov in chess in 1997, IBM had been on the hunt for a new challenge. In 2004, IBM Research manager Charles Lickel, over dinner with coworkers, noticed that the restaurant they were in had fallen silent. He soon discovered the cause of this evening hiatus: Ken Jennings, who was then in the middle of his successful 74-game run on Jeopardy!. Nearly the entire restaurant had piled toward the televisions, mid-meal, to watch the phenomenon. Intrigued by the quiz show as a possible challenge for IBM, Lickel passed the idea on, and in 2005, IBM Research executive Paul Horn backed Lickel up, pushing for someone in his department to take up the challenge of playing Jeopardy! with an IBM system. Though he initially had trouble finding any research staff willing to take on what looked to be a much more complex challenge than the wordless game of chess, eventually David Ferrucci took him up on the offer. In competitions managed by the United States government, Watson's predecessor, a system named Piquant, was usually able to respond correctly to only about 35% of clues and often required several minutes to respond. To compete successfully on Jeopardy!, Watson would need to respond in no more than a few seconds, and at that time, the problems posed by the game show were deemed to be impossible to solve.
In initial tests run during 2006 by David Ferrucci, the senior manager of IBM's Semantic Analysis and Integration department, Watson was given 500 clues from past Jeopardy! programs. While the best real-life competitors buzzed in half the time and responded correctly to as many as 95% of clues, Watson's first pass could get only about 15% correct. During 2007, the IBM team was given three to five years and a staff of 15 people to solve the problems. By 2008, the developers had advanced Watson such that it could compete with Jeopardy! champions. By February 2010, Watson could beat human Jeopardy! contestants on a regular basis.
Healthcare
In healthcare,
Watson's natural language, hypothesis generation, and evidence-based learning
capabilities are being investigated to see how Watson may contribute to clinical
decision support systems for use by medical professionals. To aid physicians in
the treatment of their patients, once a physician has posed a query to the
system describing symptoms and other related factors, Watson first parses the
input to identify the most important pieces of information; then mines patient
data to find facts relevant to the patient's medical and hereditary history;
then examines available data sources to form and test hypotheses; and finally
provides a list of individualized, confidence-scored recommendations. The
sources of data that Watson uses for analysis can include treatment guidelines,
electronic medical record data, notes from physicians and nurses, research
materials, clinical studies, journal articles, and patient information. Despite
being developed and marketed as a "diagnosis and treatment advisor,"
Watson has never actually been involved in the medical diagnosis process, only
in assisting with identifying treatment options for patients who have already
been diagnosed.
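As a rough illustration of that four-step flow (parse the query, mine the patient record, test treatment hypotheses against evidence, rank the recommendations), here is a deliberately simplified Python sketch. The data model and scoring are invented for the example and bear no relation to IBM's actual system:

```python
# Hypothetical sketch of the four-step clinical decision-support flow described
# above; every name and scoring rule here is made up for illustration.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Recommendation:
    treatment: str
    confidence: float
    evidence: list[str]

def recommend(query: str,
              patient_record: dict[str, list[str]],
              guidelines: dict[str, list[str]]) -> list[Recommendation]:
    # 1. Parse the input to identify the most important pieces of information
    #    (here, just the symptom terms the guidelines know about).
    terms = {w.lower().strip('.,') for w in query.split()}
    symptoms = terms & set(guidelines)
    # 2. Mine patient data for relevant history: drop contraindicated drugs.
    contraindicated = set(patient_record.get("allergies", []))
    # 3. Form and test hypotheses: each candidate treatment is scored by how
    #    many of the observed symptoms the guidelines say it addresses.
    support: dict[str, list[str]] = defaultdict(list)
    for symptom in symptoms:
        for treatment in guidelines[symptom]:
            if treatment not in contraindicated:
                support[treatment].append(f"indicated for {symptom}")
    # 4. Return an individualized, confidence-scored list of recommendations.
    recs = [Recommendation(t, len(ev) / max(len(symptoms), 1), ev)
            for t, ev in support.items()]
    return sorted(recs, key=lambda r: r.confidence, reverse=True)

guidelines = {"cough": ["drug_a", "drug_b"], "fever": ["drug_b"]}
patient = {"allergies": ["drug_a"]}
print(recommend("Patient presents with cough and fever.", patient, guidelines))
```

The point of the sketch is the division of labor: language analysis extracts the clinical facts, the patient record filters the hypothesis space, and the evidence sources, not the physician's query alone, determine the confidence attached to each option.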
IBM Watson Group
On January 9, 2014, IBM announced it was creating a business unit around Watson, led by senior vice president Michael Rhodin. IBM Watson Group will have headquarters in New York's Silicon Alley and will employ 2,000 people. IBM has invested $1 billion to get the division
going. Watson Group will develop three new cloud-delivered services: Watson Discovery Advisor, Watson Engagement Advisor, and Watson Explorer. Watson Discovery Advisor will focus on research and development projects in the pharmaceutical industry, publishing, and biotechnology; Watson Engagement Advisor will focus on self-service applications that surface insights in response to natural language questions posed by business users; and Watson Explorer will focus on helping enterprise users more easily uncover and share data-driven insights through federated search. The company is also launching a $100 million venture fund to spur the development of "cognitive" applications.
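Federated search here simply means fanning one query out to several independent repositories and merging the ranked results into a single list. A minimal sketch under that assumption, with made-up in-memory sources rather than anything resembling Watson Explorer's real connectors:

```python
# A rough sketch of the federated-search idea behind a tool like Watson
# Explorer (hypothetical sources; not IBM's product API): send one query to
# several independent repositories in parallel and merge the ranked results.
from concurrent.futures import ThreadPoolExecutor

def make_source(name: str, documents: list[str]):
    def search(query: str) -> list[tuple[str, int, str]]:
        q = set(query.lower().split())
        scored = [(doc, len(q & set(doc.lower().split())), name) for doc in documents]
        return [(doc, score, src) for doc, score, src in scored if score > 0]
    return search

sources = [
    make_source("crm",  ["Acme renewal due in March", "Acme support ticket open"]),
    make_source("wiki", ["Renewal process checklist", "Onboarding guide"]),
]

def federated_search(query: str) -> list[tuple[str, int, str]]:
    # Query every source concurrently, then merge and sort by score.
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda search: search(query), sources)
    merged = [hit for hits in results for hit in hits]
    return sorted(merged, key=lambda hit: hit[1], reverse=True)

print(federated_search("Acme renewal"))
```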
According to IBM, the cloud-delivered, enterprise-ready Watson has seen its speed increase 24-fold (a 2,300 percent improvement in performance) and its physical size shrink by 90 percent, from the size of a master bedroom to three stacked pizza boxes. IBM CEO Virginia Rometty said she wants Watson to generate
$10 billion in annual revenue within ten years.