Workshop on Machine Learning for Creativity

About

Many of us have dreamt of having our own JARVIS (of Marvel Comics fame) that could help us write poetry, paint a mural, compose a melody, choreograph a dance, or even write a research paper for this workshop! Machine learning has not only solved challenging problems in speech, vision, and natural language, but has also made headlines by defeating human champions in grand challenges such as Jeopardy!, Go, and, more recently, poker. Yet human-level creativity remains one of the elusive goals of artificial intelligence. Attempts to emulate creativity artificially fall under the umbrella of an emerging field called computational creativity. The goal of this workshop is to generate interest in this field among the machine learning and data science community by concentrating on applications of machine learning in creative domains. The workshop creates a forum for researchers and practitioners to exchange ideas and shape the future roadmap of the field.



Topics of Interest

Considerable interest has already emerged among artists and designers in assistive tools and frameworks for creating new and original content. We believe that advances in machine learning can help overcome the current limitations on achieving a truly creative machine.
Suggested topics for submission include, but are not limited to:


  • Formulations/perspectives about creativity.
  • Evaluation metrics for creativity.
  • Learning paradigms for creativity.
  • Large-scale analytics for understanding creativity.
  • Case studies of creative generation process.
  • Insights into solutions/models for creativity.
  • Identifying and mining creative content.
  • Creativity vs. popularity/likability.
  • Surveys or benchmark datasets related to creative technologies.
  • Assistive creative tools for professionals and end users.
  • Frameworks tailored to specific fields such as speech, vision, and natural language.
  • Domain adaptation for creativity.
  • Personalized content generation.
  • Creative conversational tools.
  • Recommendation models for creative applications.
  • Reinforcement learning for self-adaptation through interaction.
  • Multi-modal systems for creativity.
  • Applications specific to domains such as art, dance, music, literature, gaming, film, fashion, cooking, and education.
  • Interfaces for creative human-computer interaction.
  • Collaborative frameworks for creative domains.


Invited Speakers



Important Dates

May 26, 2017
Paper submission

June 16, 2017
Paper notification

Aug 14, 2017
Workshop

*All deadlines are at 11:59 PM Pacific Standard Time (PST)


Organizing Team


  • Lav R. Varshney, University of Illinois at Urbana-Champaign
  • Douglas Eck, Google Brain (Project Magenta)
  • Kush R. Varshney, IBM Research
  • Anush Sankaran, IBM Research
  • Priyanka Agrawal, IBM Research
  • Disha Shrivastava, IBM Research


Program Committee

  • Mitesh Khapra, Indian Institute of Technology, Madras
  • Anirban Laha, IBM Research
  • Saneem CG, IBM Research
  • Haizi Yu, University of Illinois at Urbana-Champaign
  • Flavio du Pin Calmon, Harvard University
  • Mark Riedl, Georgia Institute of Technology
  • Parag Jain, IBM Research
  • Karthikeyan Natesan Ramamurthy, IBM Research
  • Prasanna Sattigeri, IBM Research
  • Ravi Kothari, IBM Research
  • Ashish Verma, IBM Research
  • Sameep Mehta, IBM Research
  • Vikas Raykar, IBM Research
  • Arvind Agarwal, IBM Research

Submission Guidelines

We solicit papers of 4 to 10 pages presenting original research, preliminary results, surveys, dataset papers, case studies, proposals for new work, and position papers. We also seek poster submissions based on recently published work (please indicate where the work was published).


Following KDD conference tradition, reviews are not double-blind, and author names and affiliations should be listed. If accepted, at least one author must attend the workshop to present the work. Submitted papers must be written in English and formatted in the standard double-column layout using the ACM Proceedings Template (Tighter Alternate style). Papers should be in PDF format and submitted via the EasyChair submission site. Accepted papers will be archived on the workshop website.






References


  • Florian Pinel, Lav R. Varshney, and Debarun Bhattacharjya. "A Culinary Computational Creativity System." In Computational Creativity Research: Towards Creative Machines, pp. 327-346. Atlantis Press, 2015. (Link)
  • Lav R. Varshney, Jun Wang, and Kush R. Varshney. "Associative Algorithms for Computational Creativity." The Journal of Creative Behavior, 2015. (Link)
  • Anna Kantosalo and Hannu Toivonen. "Modes for Creative Human-Computer Collaboration: Alternating and Task-Divided Co-Creativity." In Proceedings of the Seventh International Conference on Computational Creativity (ICCC), 2016. (Link)
  • David Norton, Derrall Heath, and Dan Ventura. "Accounting for Bias in the Evaluation of Creative Computational Systems: An Assessment of DARCI." In Proceedings of the International Conference on Computational Creativity, pp. 31-38, 2015. (Link)
  • Jun-Yan Zhu, Philipp Krähenbühl, Eli Shechtman, and Alexei A. Efros. "Generative Visual Manipulation on the Natural Image Manifold." In European Conference on Computer Vision (ECCV), 2016. (Link)
  • "Stakeholder Groups in Computational Creativity Research and Practice." In Computational Creativity Research: Towards Creative Machines, Atlantis Thinking Machines, 2014. (Link)
  • Computational Creativity Research: Towards Creative Machines. Edited by Tarek Richard Besold, Marco Schorlemmer, and Alan Smaill. Atlantis Press. (Link)
  • L. Bray and O. Bown. "Ludic Human-Computer Co-Creation." In Proceedings of the Australasian Computer Music Conference, UTS, Sydney, 2015. (Link)

© 2017 Anush Sankaran. All Rights Reserved.