INF385T/CS395T: Human Computation and Crowdsourcing (Fall 2017)
THIS COURSE IS CROSS-LISTED; IF ONE SECTION IS FULL, PLEASE ENROLL IN THE OTHER. All students will receive the same credit toward graduation requirements regardless of which section they enroll in.
ON THE WAITLIST? I will do my best to ensure that any graduate student who wants to be in the class can enroll. Show up the first day of class and I will probably be able to get you in.
Registration notes specific to Computer Science (CS) students:
Textbook: none required, all readings online
Prerequisites: No prior knowledge is required; all interested and motivated students are invited to attend. This course typically attracts significant student participation from a wide variety of disciplines: information science, computer science, linguistics, electrical engineering, and design studies. Course activities are intended to serve the needs of both (1) those studying to work professionally in the area or to conduct research in IR, and (2) non-specialists interested in gaining broader exposure to and understanding of human computation and crowdsourcing methods and systems.
Course summary. This graduate seminar will review the latest research in human computation and crowdsourcing by reading peer-reviewed conference and journal papers. Students will work individually or in pairs on a self-selected, semester-long course project. The course culminates in a public poster session where course projects are presented.
Global growth in Internet connectivity and participation is driving a renaissance in human computation: the use of people rather than machines to perform certain computations for which human competency continues to exceed that of state-of-the-art algorithms (e.g., AI-hard tasks such as interpreting text or images). Just as cloud computing now enables us to harness vast Internet computing resources on demand, crowdsourcing lets us similarly call upon the online crowd to manually perform human computation tasks on demand. As crowd computing expands the traditional accuracy-time-cost tradeoffs associated with purely automated approaches, the potential to achieve these enhanced capabilities has begun to change how we design and implement intelligent systems.
While early work in crowd computing focused simply on collecting more data from crowds to train automated systems, we are increasingly seeing a new form of hybrid, socio-computational system emerge which harnesses the collective intelligence of the crowd in combination with automated AI at run-time in order to better tackle difficult processing tasks. As such, we find ourselves today in an exciting new design space, where the potential capabilities of tomorrow's computing systems are seemingly limited only by our imagination and creativity in designing algorithms to compute with crowds as well as silicon.
Examples of human computation systems: DuoLingo · EyeWire · FoldIt · GalaxyZoo · MonoTrans · Legion:Scribe · Mechanical Turk · PlateMate · ReCaptcha · Soylent · Ushahidi · VizWiz
Introductions to Human Computation and Crowdsourcing:
Advances in research have also translated into a thriving private sector, with many startups already operating and opportunities for more.
Want to publish original research?
In previous offerings of the course, several of the best, most innovative course projects have been extended beyond the semester until the work was in publishable form. If you have a great idea and are willing to work hard to get it published, the course project provides a great opportunity to refine the idea and begin developing the project with regular feedback and advising from the instructor. Examples of past course projects that were subsequently published include (see publications for links):
How to post your course paper online as a technical report? See an example from a previous semester.
Looking for a funded Research Assistant (RA) position? I typically do not offer RA positions until a student has taken a course with me and demonstrated their abilities and drive to succeed. While the availability of an RA position depends on funding, I am often looking for new RAs to help me advance the state of the art in research.
About the instructor. Associate Professor Matthew Lease directs the Information Retrieval and Crowdsourcing Lab in the School
of Information at the University of Texas at Austin. He received his Ph.D. and M.Sc. degrees in
Computer Science from Brown University, and his B.Sc. in Computer Science from the University of
Washington. His research on crowdsourcing / human computation and information retrieval has been
recognized with early career awards from NSF, IMLS, and others. Lease and co-authors received the Best Paper Award at the 2016 AAAI HCOMP conference for effective use of crowdsourcing to collect high-quality search relevance judgments. Lease has presented crowdsourcing tutorials
at ACM SIGIR, ACM WSDM, CrowdConf, and SIAM Data Mining (talk slides available online). From 2011 to 2013, he
co-organized the Crowdsourcing Track for
the U.S. National Institute of Standards & Technology (NIST) Text REtrieval Conference (TREC). In
2012, Lease spent the summer working on industrial-scale crowdsourcing at CrowdFlower.