Human Computation & Crowdsourcing

Exploring the design and evaluation of systems that leverage crowdsourcing and online communities to solve complex tasks.

My research in Human Computation and Crowdsourcing explores the design and evaluation of systems that leverage the collective intelligence of online communities to solve complex tasks. I have investigated a range of crowdsourcing applications, contributing task design techniques, empirical results, and methodological tools and guidelines.

Key Contributions:

  • Exploration of Task Designs for Popular Crowdsourcing Tasks: Investigating task designs and quality control mechanisms for complex classification and labelling tasks, along with diversity-aware approaches to paraphrase generation (a minimal quality-control sketch follows this list).
  • Methodological Advancements in Crowdsourcing Experimentation: Developing tools like ‘CrowdHub’ and guidelines for improving the rigor and reporting of controlled crowdsourcing experiments.
  • Crowdsourcing for AI Training Data Generation: Developing techniques for crowdsourcing diverse paraphrases for chatbot training and creating datasets for classification tasks, directly supporting the development of robust and effective AI models.
  • Democratizing Research Support: Exploring and implementing crowdsourcing applications to provide feedback to researchers, especially early-stage researchers, and to support cognitively intensive research tasks like systematic literature reviews.
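
As a concrete illustration of the quality control mechanisms mentioned above, the sketch below shows a common baseline: collect redundant votes per item, aggregate them by majority, and flag workers whose answers rarely agree with the aggregate. This is a minimal, generic Python sketch rather than code from CrowdHub, CrowdRev, or any specific study; the worker and item identifiers are hypothetical.

```python
# Minimal, illustrative sketch of majority-vote aggregation with a simple
# per-worker agreement check. Not taken from any published pipeline; the
# vote data below is hypothetical.
from collections import Counter, defaultdict

# Hypothetical crowd votes: (worker_id, item_id, label)
votes = [
    ("w1", "paper-1", "include"), ("w2", "paper-1", "include"), ("w3", "paper-1", "exclude"),
    ("w1", "paper-2", "exclude"), ("w2", "paper-2", "exclude"), ("w3", "paper-2", "exclude"),
]

def aggregate(votes):
    """Majority label per item; ties are left unresolved (None)."""
    by_item = defaultdict(list)
    for worker, item, label in votes:
        by_item[item].append(label)
    results = {}
    for item, labels in by_item.items():
        (top_label, top_count), *rest = Counter(labels).most_common(2)
        tied = rest and rest[0][1] == top_count
        results[item] = None if tied else top_label
    return results

def worker_agreement(votes, results):
    """Fraction of each worker's votes that match the aggregated label."""
    stats = defaultdict(lambda: [0, 0])  # worker -> [matches, answered]
    for worker, item, label in votes:
        if results.get(item) is None:
            continue
        stats[worker][1] += 1
        stats[worker][0] += int(label == results[item])
    return {w: matches / answered for w, (matches, answered) in stats.items()}

labels = aggregate(votes)
print(labels)                           # {'paper-1': 'include', 'paper-2': 'exclude'}
print(worker_agreement(votes, labels))  # w3 agrees on 1 of 2 items
```

Published pipelines typically go further, for instance weighting votes by estimated worker accuracy or combining crowd votes with machine classifiers as in the multi-predicate screening work listed below, but this aggregate-then-assess loop is usually the starting point.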

Code and Tools

Publications

  1. Innovation cockpit: a dashboard for facilitators in idea management
    Marcos Baez, and Gregorio Convertino
    In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work Companion, Seattle, Washington, USA, Feb 2012
  2. Designing a facilitator’s cockpit for an idea management system
    Marcos Baez, and Gregorio Convertino
    In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work Companion, Seattle, Washington, USA, Feb 2012
  3. Idea Management Communities in the Wild: An Exploratory Study of 166 Online Communities
    Jorge Saldivar, Marcos Baez, Carlos Rodriguez, and 2 more authors
    In 2016 International Conference on Collaboration Technologies and Systems (CTS), Oct 2016
  4. Investigating Crowdsourcing as a Method to Collect Emotion Labels for Images
    Olga Korovina, Fabio Casati, Radoslaw Nielek, and 2 more authors
    In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal QC, Canada, Apr 2018
  5. CrowdRev: A platform for Crowd-based Screening of Literature Reviews
    Jorge Ramirez, Evgeny Krivosheev, Marcos Baez, and 3 more authors
    Oct 2018
  6. Combining Crowd and Machines for Multi-predicate Item Screening
    Evgeny Krivosheev, Fabio Casati, Marcos Baez, and 1 more author
    Proc. ACM Hum.-Comput. Interact., Nov 2018
  7. Crowdsourcing for reminiscence chatbot design
    Svetlana Nikitina, Florian Daniel, Marcos Baez, and 1 more author
    In HCOMP 2018 Works in Progress and Demonstration Papers, Jul 2018
  8. Understanding the impact of text highlighting in crowdsourcing tasks
    Jorge Ramírez, Marcos Baez, Fabio Casati, and 1 more author
    In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Nov 2019
  9. CrowdHub: Extending crowdsourcing platforms for the controlled evaluation of tasks designs
    Jorge Ramírez, Simone Degiacomi, Davide Zanella, and 3 more authors
    arXiv preprint arXiv:1909.02800, Sep 2019
  10. Reliability of crowdsourcing as a method for collecting emotions labels on pictures
    Olga Korovina, Marcos Baez, and Fabio Casati
    BMC research notes, Nov 2019
  11. Idea spotter and comment interpreter: Sensemaking tools for idea management systems
    Gregorio Convertino, A. Sándor, and Marcos Baez
    In ACM Communities and Technologies Workshop: Large-Scale Idea Management and Deliberation Systems Workshop, Nov 2013
  12. Crowdsourced dataset to study the generation and impact of text highlighting in classification tasks
    Jorge Ramírez, Marcos Baez, Fabio Casati, and 1 more author
    BMC Research Notes, Nov 2019
  13. 🏆 DREC: towards a Datasheet for Reporting Experiments in Crowdsourcing
    Jorge Ramírez, Marcos Baez, Fabio Casati, and 2 more authors
    In Companion Publication of the 2020 Conference on Computer Supported Cooperative Work and Social Computing, Virtual Event, USA, Nov 2020
  14. On the impact of predicate complexity in crowdsourced classification tasks
    Jorge Ramírez, Marcos Baez, Fabio Casati, and 4 more authors
    In Proceedings of the 14th ACM International Conference on Web Search and Data Mining, Virtual Event, Israel, Mar 2021
  15. Challenges and strategies for running controlled crowdsourcing experiments
    Jorge Ramírez, Marcos Baez, Fabio Casati, and 2 more authors
    In 2020 XLVI Latin American Computing Conference (CLEI), Oct 2020
  16. 🏆 On the State of Reporting in Crowdsourcing Experiments and a Checklist to Aid Current Practices
    Jorge Ramírez, Burcu Sayin, Marcos Baez, and 4 more authors
    Proc. ACM Hum.-Comput. Interact., Oct 2021
  17. Understanding How Early-Stage Researchers Perceive External Research Feedback
    Yuchao Jiang, Marcos Baez, and Boualem Benatallah
    In ACM Collective Intelligence Conference 2021, Oct 2021
  18. Crowdsourcing Diverse Paraphrases for Training Task-oriented Bots
    Jorge Ramírez, Auday Berro, Marcos Baez, and 2 more authors
    arXiv preprint arXiv:2109.09420, Sep 2021
  19. Crowdsourcing syntactically diverse paraphrases with diversity-aware prompts and workflows
    Jorge Ramírez, Marcos Baez, Auday Berro, and 2 more authors
    In Advanced Information Systems Engineering, Oct 2022
  20. Understanding how early-stage researchers leverage socio-technical affordances for distributed research support
    Yuchao Jiang, Boualem Benatallah, and Marcos Baez
    Information and Software Technology, Oct 2024
  21. Rsourcer: Scaling Feedback on Research Drafts
    Yuchao Jiang, Boualem Benatallah, and Marcos Baez
    In Intelligent Information Systems, Oct 2023
  22. Towards Scaling External Feedback for Early-Stage Researchers: A Survey Study
    Yuchao Jiang, Marcos Baez, and Boualem Benatallah
    In Cooperative Information Systems: 29th International Conference, CoopIS 2023, Groningen, The Netherlands, Oct 2023