ICER 2024
Mon 12 - Thu 15 August 2024 Melbourne, Victoria, Australia

Tue 13 Aug

Displayed time zone: Brisbane

08:30 - 08:45
Registration (Catering)
08:45 - 09:15
Opening Remarks (Catering)
08:45
30m
Day opening
Opening Remarks
Catering

09:15 - 10:15
Student Perceptions and Self-Assessment (Research Papers)
Chair(s): Brian Dorn University of Nebraska at Omaha
09:15
20m
Talk
Understanding the Reasoning Behind Students' Self-Assessments of Ability in Introductory Computer Science Courses
Research Papers
Melissa Chen Northwestern University, Yinmiao Li Northwestern University, Eleanor O'Rourke Northwestern University
09:35
20m
Talk
"In the Beginning, I Couldn't Necessarily Do Anything With It": Links Between Compiler Error Messages and Sense of Belonging
Research Papers
Maja Dornbusch University of Münster, Jan Vahrenhold University of Münster
Link to publication DOI
09:55
20m
Talk
Exploring the Interplay of Metacognition, Affect, and Behaviors in an Introductory Computer Science Course for Non-Majors
Research Papers
Yinmiao Li Northwestern University, Melissa Chen Northwestern University, Ayse Hunt Northwestern University, Haoqi Zhang Northwestern University, Eleanor O'Rourke Northwestern University
10:15 - 11:00
Coffee (Catering)
10:15
45m
Coffee break
Break
Catering

11:00 - 12:00
Learning Interventions (Research Papers)
Chair(s): Sebastian Dziallas University of the Pacific
11:00
20m
Talk
Scaffolding Novices: Analyzing When and How Parsons Problems Impact Novice Programming in an Integrated Science Assignment
Research Papers
Benyamin Tabarsi North Carolina State University, Heidi Reichert North Carolina State University, Nicholas Lytle Georgia Institute of Technology, Veronica Catete North Carolina State University, Tiffany Barnes North Carolina State University
11:20
20m
Talk
Evaluating the Effectiveness of a Testing Checklist Intervention in CS2: A Quasi-experimental Replication Study
Research Papers
Gina Bai Vanderbilt University, Zuoxuan Jiang Vanderbilt University, Thomas Price North Carolina State University, Kathryn Stolee North Carolina State University
11:40
20m
Talk
Evaluating How Novices Utilize Debuggers and Code Execution to Understand Code
Research Papers
Mohammed Hassan University of Illinois at Urbana-Champaign, Grace Zeng University of Illinois at Urbana-Champaign, Craig Zilles University of Illinois at Urbana-Champaign
12:00 - 13:15
12:00
75m
Lunch
Lunch
Catering

13:15 - 14:15
GenAI and Computing Education (I) (Research Papers)
Chair(s): Judy Sheard Monash University
13:15
20m
Talk
Debugging with an AI Tutor: Investigating Novice Help-seeking Behaviors and Perceived Learning
Research Papers
Stephanie Yang Harvard University, Hanzhang Zhao Harvard Graduate School of Education, Yudian Xu Harvard Graduate School of Education, Karen Brennan Harvard Graduate School of Education, Bertrand Schneider Harvard Graduate School of Education
13:35
20m
Talk
Evaluating Contextually Personalized Programming Exercises Created with Generative AI
Research Papers
Evanfiya Logacheva Aalto University, Arto Hellas Aalto University, James Prather Abilene Christian University, Sami Sarsa University of Jyväskylä, Juho Leinonen Aalto University
Link to publication DOI Pre-print
13:55
20m
Talk
Insights from Social Shaping Theory: The Appropriation of Large Language Models in an Undergraduate Programming Course
Research Papers
Aadarsh Padiyath University of Michigan, Xinying Hou University of Michigan, Amy Pang University of Michigan, Diego Viramontes Vargas University of Michigan, Xingjian Gu University of Michigan, Tamara Nelson-Fromm University of Michigan, Zihan Wu University of Michigan, Mark Guzdial University of Michigan, Barbara Ericson University of Michigan
Pre-print
14:15 - 15:00
Coffee (Catering)
14:15
45m
Coffee break
Break
Catering

15:00 - 16:00
Student Challenges (Research Papers)
Chair(s): Juho Leinonen Aalto University
15:00
20m
Talk
Seeking Consent for Programming Process Data Collection with Trustee-Based Encryption
Research Papers
Björn Fischer RheinMain University of Applied Sciences, Wiesbaden, Germany, Berit Barthelmes University of Zurich, Zurich, Switzerland, Sven Eric Panitz RheinMain University of Applied Sciences, Wiesbaden, Germany, Eva-Maria Iwer RheinMain University of Applied Sciences, Wiesbaden, Germany, Ralf Dörner RheinMain University of Applied Sciences, Wiesbaden, Germany
15:20
20m
Talk
Influence of Personality Traits on Plagiarism Through Collusion in Programming Assignments
Research Papers
Parthasarathy PD BITS Pilani KK Birla Goa Campus, Ishaan Kapoor BITS Pilani, KK Birla Goa Campus, Swaroop Joshi BITS Pilani KK Birla Goa Campus, Sujith Thomas BITS Pilani KK Birla Goa Campus
15:40
20m
Talk
Students Struggle with Concepts in Dijkstra’s Algorithm
Research Papers
Artturi Tilanterä Aalto University, Juha Sorva Aalto University, Otto Seppälä Aalto University, Ari (Archie) Korhonen Aalto University
16:00 - 16:20
16:00
20m
Coffee break
Break
Catering

16:20 - 17:00
Interactive Learning Challenges (Research Papers)
Chair(s): Carol Fletcher Texas Advanced Computing Center
16:20
20m
Talk
Probeable Problems for Beginner-level Programming-with-AI Contests
Research Papers
Mrigank Pawagi Indian Institute of Science, Bengaluru, Viraj Kumar Indian Institute of Science, India
Pre-print
16:40
20m
Talk
Distractors Make You Pay Attention: Investigating the Learning Outcomes of Including Distractor Blocks in Parsons Problems
Research Papers
David Smith University of Illinois at Urbana-Champaign, Seth Poulsen University of Illinois at Urbana-Champaign, Chinny Emeka University of Illinois at Urbana-Champaign, Zihan Wu University of Michigan, Carl Haynes-Magyar Carnegie Mellon University, Craig Zilles University of Illinois at Urbana-Champaign

Wed 14 Aug

Displayed time zone: Brisbane

09:00 - 09:15
Announcements (Catering)
09:15 - 10:15
Teaching Practices (I) (Research Papers)
Chair(s): Quintin Cutts University of Glasgow, UK
09:15
20m
Talk
Instructional Transparency: Just to Be Clear, It's a Good Thing
Research Papers
Vidushi Ojha Harvey Mudd College, Andrea Watkins University of Illinois Urbana-Champaign, Christopher Perdriau University of Illinois at Urbana-Champaign, Kathleen Isenegger University of Illinois at Urbana-Champaign, Colleen M. Lewis University of Illinois at Urbana-Champaign
09:35
20m
Talk
Exploring the Effects of Grouping by Programming Experience in Q&A Forums
Research Papers
Naaz Sibia University of Toronto Mississauga, Angela Zavaleta Bernuy University of Toronto, Tiana V. Simovic University of Toronto, Chloe Huang University of Toronto, Yinyue Tan University of Toronto, Eunchae Seong University of Toronto, Carolina Nobre University of Toronto, Dan Zingaro University of Toronto Mississauga, Michael Liut University of Toronto Mississauga, Andrew Petersen University of Toronto
09:55
20m
Talk
Teaching Digital Accessibility in Computing Education: Views of Educators in India
Research Papers
Parthasarathy PD BITS Pilani KK Birla Goa Campus, Swaroop Joshi BITS Pilani KK Birla Goa Campus
10:15 - 11:00
Coffee (Catering)
10:15
45m
Coffee break
Break
Catering

11:00 - 11:40
Equity and Diversity (I) (Research Papers)
Chair(s): Andrew Petersen University of Toronto
11:00
20m
Talk
Exploring the Impact of Assessment Policies on Marginalized Students' Experiences in Post-Secondary Programming Courses
Research Papers
Eman Sherif University of Washington, Jayne Everson University of Washington, Megumi Kivuva University of Washington, Seattle, Mara Kirdani-Ryan University of Washington, Amy Ko University of Washington
11:20
20m
Talk
Invisible Women in IT: Examining Gender Representation in K-12 ICT Teaching Materials
Research Papers
Melissa Høegh Marcher IT University of Copenhagen, Denmark, Ingrid Maria Christensen IT University of Copenhagen, Denmark, Nanna Inie IT University of Copenhagen, Center for Computing Education (CCER), Claus Brabrand IT University of Copenhagen
12:00 - 13:15
12:00
75m
Lunch
Lunch
Catering

13:15 - 14:15
Understanding Students (Research Papers)
Chair(s): Jan Vahrenhold University of Münster
13:15
20m
Talk
Validating, Refining, and Identifying Programming Plans Using Learning Curve Analysis on Code Writing Data
Research Papers
Mehmet Arif Demirtas University of Illinois Urbana-Champaign, Max Fowler University of Illinois, Nicole Hu University of Illinois Urbana-Champaign, Kathryn Cunningham University of Illinois Urbana-Champaign
DOI Pre-print
13:35
20m
Talk
An Electroencephalography Study on Cognitive Load in Visual and Textual Programming
Research Papers
Sverrir Thorgeirsson ETH Zurich, Chengyu Zhang ETH Zurich, Theo B. Weidmann ETH Zurich, Karl-Heinz Weidmann University of Applied Sciences Vorarlberg, Zhendong Su ETH Zurich
13:55
20m
Talk
Profiling Conversational Programmers at University: Insights into their Motivations and Goals from a Broad Sample of Non-Majors
Research Papers
Jinyoung Hur University of Illinois Urbana-Champaign, Kathryn Cunningham University of Illinois Urbana-Champaign
DOI Pre-print
15:20 - 16:00
Data and Scalability (Research Papers)
Chair(s): Barbara Ericson University of Michigan
15:20
20m
Talk
Overcoming Barriers in Scaling Computing Education Research Programming Tools: A Developer’s Perspective
Research Papers
Keith Tran North Carolina State University, John Bacher North Carolina State University, Yang Shi North Carolina State University, James Skripchuk North Carolina State University, Thomas Price North Carolina State University
15:40
20m
Talk
Learning an Explanatory Model of Data-Driven Technologies can Lead to Empowered Behavior: A Mixed-Methods Study in K-12 Computing Education
Research Papers
Lukas Höper Paderborn University, Carsten Schulte University of Paderborn, Andreas Mühling Kiel University
16:00 - 16:20
16:00
20m
Coffee break
Break
Catering

16:20 - 17:00
Student Support (Research Papers)
Chair(s): Amy Ko University of Washington
16:20
20m
Talk
The Trees in the Forest: Characterizing Computing Students' Individual Help-Seeking Approaches
Research Papers
Shao-Heng Ko Duke University, Kristin Stephens-Martinez Duke University
16:40
20m
Talk
Regulation, Self-Efficacy, and Participation in CS1 Group Work
Research Papers
Carolin Wortmann University of Münster, Jan Vahrenhold University of Münster
Link to publication DOI

Thu 15 Aug

Displayed time zone: Brisbane

09:00 - 09:15
Announcements (Catering)
09:15 - 10:15
Teaching Practices (II) (Research Papers)
Chair(s): Craig Zilles University of Illinois at Urbana-Champaign
09:15
20m
Talk
Perpetual Teaching Across Temporary Places: Conditions, Motivations, and Practices of Media Artists Teaching Computing Workshops
Research Papers
Alice Chung University of California, San Diego, Philip Guo University of California at San Diego
Pre-print
09:35
20m
Talk
Evaluating Exploratory Reading Groups for Supporting Undergraduate Research Pipelines in Computing
Research Papers
David M. Torres-Mendoza University of California, Santa Cruz, Saba Kheirinejad University of Oulu, Mustafa Ajmal University of California, Santa Cruz, Ashwin Chembu University of California Davis, Dustin Palea University of California, Santa Cruz, Jim Whitehead University of California, Santa Cruz, David Lee University of California, Santa Cruz
09:55
20m
Talk
Layering Sociotechnical Cybersecurity Concepts Within Project-Based Learning
Research Papers
Brandt Redd University of Utah, Ying Tang Southwest University, Hadar Ziv University of California, Irvine, Sameer Patil University of Utah
Link to publication DOI
10:15 - 11:00
Coffee (Catering)
10:15
45m
Coffee break
Break
Catering

11:00 - 11:40
Equity and Diversity (II) (Research Papers)
Chair(s): Mark Guzdial University of Michigan
11:00
20m
Talk
Debugging for Inclusivity in Online CS Courseware: Does it Work?
Research Papers
Amreeta Chatterjee Oregon State University, Rudrajit Choudhuri Oregon State University, Mrinmoy Sarkar Flockby, Soumiki Chattopadhyay Oregon State University, Dylan Liu Oregon State University, Samarendra Hedaoo Oregon State University, Margaret Burnett Oregon State University, Anita Sarma Oregon State University
Link to publication DOI Pre-print
11:20
20m
Talk
Beyond "Awareness": If We Teach Inclusive Design, Will Students Act On It?
Research Papers
Rosalinda Garcia Oregon State University, Patricia Morreale Kean University, Pankati Patel Kean University, Jimena Noa Guevara Oregon State University, Dahana Moz-Ruiz Kean University, Sabyatha Sathish Kumar Oregon State University, Prisha Velhal Oregon State University, Alec Busteed Oregon State University, Margaret Burnett Oregon State University
Link to publication DOI Pre-print
12:00 - 13:15
12:00
75m
Lunch
Lunch
Catering

13:15 - 14:15
GenAI and Computing Education (II) (Research Papers)
Chair(s): Kathryn Cunningham University of Illinois Urbana-Champaign
13:15
20m
Talk
Using Benchmarking Infrastructure to Evaluate LLM Performance on CS Concept Inventories: Challenges, Opportunities, and Critiques
Research Papers
Murtaza Ali University of Washington, Prerna Rao University of Washington, Yifan Mai Stanford University, Benjamin Xie Stanford University
DOI Pre-print
13:35
20m
Talk
The Widening Gap: The Benefits and Harms of Generative AI for Novice Programmers
Research Papers
James Prather Abilene Christian University, Brent Reeves Abilene Christian University, Juho Leinonen Aalto University, Stephen MacNeil Temple University, Arisoa Randrianasolo Abilene Christian University, Brett Becker University College Dublin, Bailey Kimmel Abilene Christian University, Jared Wright Abilene Christian University, Ben Briggs Abilene Christian University
Link to publication DOI Pre-print
13:55
20m
Talk
An Investigation of the Drivers of Novice Programmers’ Intentions to Use Web Search and GenAI
Research Papers
James Skripchuk North Carolina State University, John Bacher North Carolina State University, Thomas Price North Carolina State University
15:20 - 16:00
Ethical Practices (Research Papers)
Chair(s): Benjamin Xie Stanford University
15:20
20m
Talk
Integrating Philosophy Teaching Perspectives to Foster Adolescents' Ethical Sensemaking of Computing Technologies
Research Papers
Rotem Landesman University of Washington, Jean Salac University of Washington, Seattle, Jared Ordona Lim Georgia Institute of Technology, Amy Ko University of Washington
15:40
20m
Talk
"It's Not Exactly Meant to Be Realistic": Student Perspectives on the Role of Ethics In Computing Group Projects
Research Papers
Michelle Tran University of Colorado Boulder, Casey Fiesler University of Colorado Boulder
16:00 - 16:40
Closing Business (Catering)

Accepted Papers

An Electroencephalography Study on Cognitive Load in Visual and Textual Programming
Research Papers
An Investigation of the Drivers of Novice Programmers’ Intentions to Use Web Search and GenAI
Research Papers
Beyond "Awareness": If We Teach Inclusive Design, Will Students Act On It?
Research Papers
Link to publication DOI Pre-print
Debugging for Inclusivity in Online CS Courseware: Does it Work?
Research Papers
Link to publication DOI Pre-print
Debugging with an AI Tutor: Investigating Novice Help-seeking Behaviors and Perceived Learning
Research Papers
Distractors Make You Pay Attention: Investigating the Learning Outcomes of Including Distractor Blocks in Parsons Problems
Research Papers
Evaluating Contextually Personalized Programming Exercises Created with Generative AI
Research Papers
Link to publication DOI Pre-print
Evaluating Exploratory Reading Groups for Supporting Undergraduate Research Pipelines in Computing
Research Papers
Evaluating How Novices Utilize Debuggers and Code Execution to Understand Code
Research Papers
Evaluating the Effectiveness of a Testing Checklist Intervention in CS2: A Quasi-experimental Replication Study
Research Papers
Exploring the Effects of Grouping by Programming Experience in Q&A Forums
Research Papers
Exploring the Impact of Assessment Policies on Marginalized Students' Experiences in Post-Secondary Programming Courses
Research Papers
Exploring the Interplay of Metacognition, Affect, and Behaviors in an Introductory Computer Science Course for Non-Majors
Research Papers
Influence of Personality Traits on Plagiarism Through Collusion in Programming Assignments
Research Papers
Insights from Social Shaping Theory: The Appropriation of Large Language Models in an Undergraduate Programming Course
Research Papers
Pre-print
Instructional Transparency: Just to Be Clear, It's a Good Thing
Research Papers
Integrating Philosophy Teaching Perspectives to Foster Adolescents' Ethical Sensemaking of Computing Technologies
Research Papers
"In the Beginning, I Couldn't Necessarily Do Anything With It": Links Between Compiler Error Messages and Sense of Belonging
Research Papers
Link to publication DOI
Invisible Women in IT: Examining Gender Representation in K-12 ICT Teaching Materials
Research Papers
"It's Not Exactly Meant to Be Realistic": Student Perspectives on the Role of Ethics In Computing Group Projects
Research Papers
Layering Sociotechnical Cybersecurity Concepts Within Project-Based Learning
Research Papers
Link to publication DOI
Learning an Explanatory Model of Data-Driven Technologies can Lead to Empowered Behavior: A Mixed-Methods Study in K-12 Computing Education
Research Papers
Overcoming Barriers in Scaling Computing Education Research Programming Tools: A Developer’s Perspective
Research Papers
Perpetual Teaching Across Temporary Places: Conditions, Motivations, and Practices of Media Artists Teaching Computing Workshops
Research Papers
Pre-print
Probeable Problems for Beginner-level Programming-with-AI Contests
Research Papers
Pre-print
Profiling Conversational Programmers at University: Insights into their Motivations and Goals from a Broad Sample of Non-Majors
Research Papers
DOI Pre-print
Regulation, Self-Efficacy, and Participation in CS1 Group Work
Research Papers
Link to publication DOI
Scaffolding Novices: Analyzing When and How Parsons Problems Impact Novice Programming in an Integrated Science Assignment
Research Papers
Seeking Consent for Programming Process Data Collection with Trustee-Based Encryption
Research Papers
Students Struggle with Concepts in Dijkstra’s Algorithm
Research Papers
Teaching Digital Accessibility in Computing Education: Views of Educators in India
Research Papers
The Trees in the Forest: Characterizing Computing Students' Individual Help-Seeking Approaches
Research Papers
The Widening Gap: The Benefits and Harms of Generative AI for Novice Programmers
Research Papers
Link to publication DOI Pre-print
Understanding the Reasoning Behind Students' Self-Assessments of Ability in Introductory Computer Science Courses
Research Papers
Using Benchmarking Infrastructure to Evaluate LLM Performance on CS Concept Inventories: Challenges, Opportunities, and Critiques
Research Papers
DOI Pre-print
Validating, Refining, and Identifying Programming Plans Using Learning Curve Analysis on Code Writing Data
Research Papers
DOI Pre-print

Call for Papers

Aims and Scope

The 20th annual ACM Conference on International Computing Education Research (ICER) aims to gather high-quality contributions to the Computing Education Research discipline. The “Research Papers” track invites submissions describing original research results related to any aspect of teaching and learning computing, from introductory through advanced material. Submissions are welcome from across the research methods used in Computing Education Research and related fields. Each contribution will be assessed based on:

  • the appropriateness and soundness of its methods
  • its relevance to teaching or learning computing, and
  • the depth of its contribution to the community’s understanding of the question at hand.

Research areas of particular interest include:

  • design-based research, learner-centered design, and evaluation of educational technology supporting computing knowledge or skills development,
  • discipline based education research (DBER) about computing, computer science, and related disciplines,
  • informal learning experiences related to programming and software development (all ages), ranging from after-school programs for children, to end-user development communities, to workplace training of computing professionals,
  • learnability of programming languages and tools for learning programming and computing concepts,
  • learning analytics and educational data mining in computing education contexts,
  • learning sciences work in the computing content domain,
  • measurement instrument development and validation (e.g., concept inventories, attitude scales, etc.) for use in computing disciplines,
  • pedagogical environments fostering computational thinking,
  • psychology of programming,
  • rigorous replication of empirical work, relevant to computing education, to compare with or extend previous empirical research results,
  • professional development for computing educators at all levels.

While the above list is non-exclusive, authors are also invited to consider the call for papers for the “Lightning Talks & Posters” and “Work-in-Progress” tracks if in doubt about the suitability of their work for this track.

Please see the Submission Instructions for details on how to prepare your submission. It includes links to the relevant ACM policies including the ACM Policy on Plagiarism, Misrepresentation, and Falsification as well as the ACM Publications Policy on Research Involving Human Participants and Subjects.

All questions about this call should go to the ICER 2024 program committee chairs at pc-chairs@icer.acm.org.

Important Dates

All submission deadlines are “anywhere on Earth” (AoE, UTC-12).

  • Titles, abstracts, and authors due (the chairs will use this information to assign papers to PC members): Friday, March 22nd, 2024
  • Full paper submission deadline: Friday, March 29th, 2024
  • Decisions announced: Tuesday, May 21st, 2024
  • “Conditional Accept” revisions due: Thursday, May 30th, 2024
  • “Conditional Accept” revisions approval notification: Thursday, June 6th, 2024
  • Final versions due to TAPS: Wednesday, June 12th, 2024
  • Published in the ACM Digital Library: the official publication date is the date the proceedings are made available in the ACM Digital Library, which will be the first day of the conference. The official publication date may affect the deadline for any patent filings related to published work.

Submission Process

Submit at the ICER 2024 HotCRP site.

When you submit the abstract or full version ready for review, you need to perform the following actions:

  • Check the checkbox “ready for review” at the bottom of the submission form. (Otherwise it will be marked as a draft).

  • Check the checkbox “I have read and understood the ACM Publications Policy on Research Involving Human Participants and Subjects”. Note: “Where such research is conducted in countries where no such local governing laws and regulations related to human participant and subject research exist, Authors must at a bare minimum be prepared to show compliance with the above detailed principles.”

  • Check the checkbox “I have read and understood the ACM Policy on Plagiarism, Misrepresentation, and Falsification; in particular, no version of this work is under submission elsewhere.” Make sure to disclose possible overlap with your own previous work (“redundant publication”) to the ICER Program Committee co-chairs.

  • Check the checkbox “I have read and understood the ICER Anonymization Policy” (see below).

ICER Anonymization Policy

ICER research paper submissions will be reviewed using a double-anonymous process: the authors do not know the identity of the reviewers and the reviewers do not know the identity of the authors. To ensure this:

  • Avoid titles that indicate a clearly identifiable research project.

  • Remove author names and affiliations. (If you are using LaTeX, you can start your document declaration with \documentclass[manuscript,review,anonymous]{acmart} to easily anonymize these; see the sketch after this list.)

  • Avoid referring to yourself when citing your own work.

  • Redact (just for review) portions of positionality statements that would identify you within the community (perhaps due to demographics shared by few others).

  • Avoid references to your affiliation. For example, instead of referring to your actual university, write “A Large Metropolitan University (ALMU)” rather than “Auckland University of Technology (AUT)”.

  • Redact any other identifying information, such as contributors, course numbers, IRB names and numbers, and grant titles and numbers, from the main text and the acknowledgements.

  • Omit author details from the PDF you generate, such as author name or the name of the source document. These are often automatically inserted into exported PDFs, so be sure to check your PDF before submission.
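
For LaTeX users, here is a minimal sketch of what an anonymized submission might look like (the title, author, affiliation, and email below are placeholders; with the anonymous option, acmart should replace the author block with an anonymous placeholder in the generated review PDF):

    \documentclass[manuscript,review,anonymous]{acmart}
    \begin{document}
    \title{Your Title Here}
    \author{Jane Doe}% placeholder; suppressed by the "anonymous" option
    \affiliation{\institution{A Large Metropolitan University}}
    \email{jane@example.org}% placeholder; suppressed by the "anonymous" option
    \maketitle
    Body text, with identifying details redacted as described above.
    \end{document}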

Do not simply cover identifying details with a black box: the text underneath can easily be revealed by dragging the cursor over it, and it will still be read by screen readers.

Work that is not sufficiently anonymized will be desk-rejected by the PC chairs without offering an option to redact and resubmit.

Authoring Guidelines

The ICER conference maintains an evolving author guide, full of recommendations about scope, statistics, qualitative methods, theory, and other concerns that may arise when drafting your submission. These guidelines are a ground truth for reviewers; study them closely as you plan your research and prepare your submission.

Conflict of Interests

The SIGCSE Conflict of Interest policy applies to all submissions. You can review how conflicts will be managed by consulting our reviewer training, which details our review process.

Submission Format and Publication Workflow

Papers submitted to the research track of ICER 2024 must be prepared according to the ACM TAPS workflow. Read this page carefully to understand the workflow.

Starting in 2021, ICER switched to a publication format (called TAPS) that separates content from presentation in support of accessibility. This means that the submission format and the publication format differ. For submission, we standardize on a single-column presentation.

  • The submission template is either the single-column Word Submission Template or the single-column LaTeX template (using the “manuscript,review,anonymous” options in acmart; see the sample-manuscript.tex example in the LaTeX master template samples). Reviewers will review in this single-column format. You can download these templates from the ACM Master Article Templates page.
  • The publication template is either the single-column Word Submission Template or the LaTeX template using the “sigconf” style in acmart. You can download the templates from the ACM TAPS workflow page, where you can also see example papers using the TAPS-compatible Word and LaTeX templates. If your paper is accepted, you will use the TAPS system to generate your final publication outputs. This involves more than just submitting a PDF: you will instead submit your Word or LaTeX source files and fix any errors in your source before the final version deadline listed above. The final published versions will be in the ACM two-column conference PDF format (as well as XML, HTML, and ePub formats in the future).

For LaTeX users, be aware that there is a list of approved LaTeX packages for use with ACM TAPS. Not all packages are allowed.

This separation of submission and publication format results in several benefits:

  • Improved quality of paper metadata, improving ACM Digital Library search.
  • Multiple paper output formats, including PDFs, responsive HTML5, XML, and ePub.
  • Improved accessibility of paper content for people with disabilities.
  • Streamlined publication timelines.

Submission Length

Authors may submit papers up to 11,000 words in length, excluding acknowledgements, references, and figures, but including all other text (tables included). To clarify, “all other text” does include appendices. ICER papers must be self-contained in the sense that reviewers can assess the contribution without referring to any external material. Appendices in the submitted PDF are considered to be part of the main text and thus are subject to the word count. If authors want to provide additional material, e.g., codebooks, they must do so in an anonymized way via an external web resource of their choice; reviewers will neither be required nor asked, however, to consult such resources when assessing a paper’s contribution. The PC chairs will use the following procedures for counting words for TAPS-approved formats:

  • For papers written in the Microsoft Word template, Word’s built-in word-count mechanism will be used, selecting all text except acknowledgements and references.
  • For papers written in the LaTeX template, the document will be converted to plain text using the “ExtractText” functionality of the Apache pdfbox suite and then post-processed with a standard command-line word count tool (“wc -w”, to be precise). Line numbers added by the “review” class option for LaTeX will be removed prior to counting by using “grep -v -E ‘^[0-9]+$’” (thanks to N. Brown for this).
    • We acknowledge that many authors may want to use Overleaf to avoid dealing with command-line tools and, consequently, may be less enthusiastic about using another command-line tool for assessing the word count. As configured by default, Overleaf does not count text in tables, captions, and math formulae and is thus very likely to significantly underestimate the number obtained through the tool described above. To obtain a more realistic word count while writing the manuscript, authors need to take these additional steps:
      • Add the following lines at the very beginning of your Overleaf LaTeX document:
      %TC:macro \cite [option:text,text]
      %TC:macro \citep [option:text,text]
      %TC:macro \citet [option:text,text]
      %TC:envir table 0 1
      %TC:envir table* 0 1
      %TC:envir tabular [ignore] word
      %TC:envir displaymath 0 word
      %TC:envir math 0 word
      %TC:envir comment 0 0
      
      • Make sure to write math formulae delimited by \begin{math} \end{math} for in-line math and \begin{displaymath} \end{displaymath} for equations. Do not use dollar signs or \[ \]; these will result in Overleaf not counting math tokens (unlike Word and pdfbox) and thus underestimating your word count. (See the sketch after this list.)
    • The above flags will ensure that in-text citations, tables, and math formulae will be counted but that comments will be ignored.
    • The above flags do not cover more advanced LaTeX environments, so if authors use such environments, they should interpret the Overleaf word count with care (then again, if authors know how to work with such environments it is very reasonable to assume that they also know how to work with command-line tools such as pdfbox).
    • Authors relying on the Overleaf word count should note that the submission chairs will not have access to the source files and cannot re-run or verify any counts produced by the submitting authors. To provide fair treatment across all submission types, only the approved tools mentioned above will be used for word count. That said, the submission chairs will operate under a bona fide assumption when it comes to extreme borderline cases.
  • Papers in either format may not use figures to render text in ways that work around the word count limit; papers abusing figures in this way will be desk-rejected.
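
For instance, here is a small sketch of the recommended math markup; per the advice above, the first two forms should be counted by Overleaf with the flags in place, while the commented line shows the delimiters to avoid:

    The algorithm runs in \begin{math} O(n \log n) \end{math} time.
    \begin{displaymath}
      T(n) = 2\,T(n/2) + n
    \end{displaymath}
    % Avoid: $O(n \log n)$ and \[ T(n) = 2T(n/2) + n \]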

A paper under the word count limit with either of the above approved tools is acceptable. The submissions chairs will evaluate each submission using the procedures above, notify the PC chairs of papers exceeding the limit, and desk-reject any papers that do.

We expect papers to vary in word count. Abstracts may vary in length; fewer than 300 words is a good guideline for conciseness. A submission’s length should be commensurate with its contributions; we expect most papers to be under 9,000 words according to the rules above, though some may use up to the limit in order to convey details the authors deem necessary to evaluate the work. Papers may be judged as too long if they are repetitive or verbose, violate formatting rules, or use figures to save on word count. Papers may be judged as too short if they omit critical details or ignore relevant prior work. See the reviewer training for more on how reviewers will be instructed to assess conciseness.

All of the procedures above, and the TAPS workflow, will likely undergo continued iteration in partnership with ACM, the ICER Steering Committee, and the SIGCSE board. Notify the chairs of questions, edge cases, and other concerns to help improve this new workflow.

Acceptance and Conditional Acceptance

All papers recommended for acceptance after the Senior PC meetings are either accepted or conditionally accepted. For accepted papers, no resubmission is required other than the final camera-ready version. For conditionally accepted papers, meta-reviews will indicate one or more minor revisions that are necessary for final acceptance; authors are responsible for submitting these revisions to HotCRP prior to the “Conditional Accept revisions due” deadline in the Call for Papers. The Senior PC and Program Chairs will review the final revisions; if they are acceptable, the paper will be officially accepted, and authors will have one week to submit an approved camera-ready version to TAPS for publication. If the Senior PC and Program Chairs judge that the requested revisions were not suitably addressed, the paper will be rejected.

Because the turnaround time for conditional acceptance is only one week, requested revisions will necessarily be minor: they may include presentation issues or requests for added clarity or details helpful for future readers of the archived paper. New results, new methodological details that change the interpretation of the results, or other substantially new content will neither be asked for nor allowed to be added.

Kudos

After a paper has been accepted and uploaded into the ACM Digital Library, authors will receive an invitation from Kudos to create an account and add plain-language text into Kudos on its platform. The Kudos “Shareable PDF” integration with ACM will then allow an author to generate a PDF to upload to websites, such as author homepages, institutional repositories, and preprint services, such as ArXiv. This PDF contains the author’s plain-text summary of the paper as well as a link to the full-text version of an article in the ACM Digital Library, adding to the DL download and citation counts there, as well as adding views from other platforms to the author’s Kudos dashboard.

Using Kudos is entirely optional. Authors may also use the other ACM copyright options to share their work (retaining copyright, paying for open access, etc.).

ACM Publications Policy

By submitting your article to an ACM Publication, you are hereby acknowledging that you and your co-authors are subject to all ACM Publications Policies, including ACM’s new Publications Policy on Research Involving Human Participants and Subjects. Alleged violations of this policy or any ACM Publications Policy will be investigated by ACM and may result in a full retraction of your paper, in addition to other potential penalties, as per ACM Publications Policy.

Please ensure that you and your co-authors obtain an ORCID ID so you can complete the publishing process for your accepted paper. ACM has been involved in ORCID from the start and we have recently made a commitment to collect ORCID IDs from all of our published authors. We are committed to improving author discoverability, ensuring proper attribution, and contributing to ongoing community efforts around name normalization; your ORCID ID will help in these efforts.

If you are reading this page, you are probably considering submitting to ICER. Congratulations! We are excited to review your work. Whether your research is just starting or nearly finished, this guide is intended to help authors meet the expectations of the computing education research community. It reflects a community-wide perspective on what constitutes rigorous research on the teaching and learning of computing.

Read on for our community’s current guidelines, and if you like, read our reviewer guidelines to understand our review process and review criteria.

What’s in scope at ICER?

ICER’s goal is to be an inclusive conference, both with respect to epistemology (how we know we know things) and with respect to phenomena (who is learning and in what context). Therefore, any research related to the teaching and learning of computing is in scope, using any definition of computing, and using any methods. We particularly encourage work that goes beyond the community’s past focus on introductory programming courses in post-secondary education, such as work on primary and secondary education, more advanced computing concepts, and informal learning in any setting or amongst adults. (However, note that simply using computing technology to perform research in an educational setting is not in itself enough; the focus must be on the teaching or learning of computing topics.) If you have not seen a particular topic published at ICER, or a particular method used, that is okay. We value new topics, new methods, new perspectives, and new ideas, just as much as more broadly accepted ones.

That said, under the current review process, we cannot promise that we have recruited all the necessary expertise to our program committee to fairly review your work. Check who is on the program committee this year, and if you do not see a lot of expertise on your methods or phenomena, make sure your submission spends a bit of extra time explaining theories or methods that reviewers are unlikely to know. If you have any questions regarding this, email the program chairs (pc-chairs@icer.acm.org).

Note that we used the word “research” above. Research is hard to define, but we can say that ICER is not a place to submit practical descriptions of courses, curriculum, or instruction materials you want to share. If you’re looking to share your experiences at a conference, consider submitting to the SIGCSE Technical Symposium’s Experience Report or Position and Curricula Initiatives tracks. Research, in contrast, should meet the criteria presented throughout this document.

What makes a good computing education research paper?

It’s impossible to anticipate every kind of paper that might be submitted. The current ICER review criteria are listed in the reviewer guidelines. These will evolve over time as the community grows. There are many other criteria that reviewers could discuss in relation to specific types of research contributions, but the criteria listed there are generally inclusive of many epistemologies and contribution types. This includes empirical studies that answer research questions, replicate prior results, or present negative research results, as well as other, non-empirical types of research that provide novel or deepened insights into the teaching and learning of computer science content.

What prior work should be cited?

As with any research work, your submission should cite all significant publications that are relevant to your research questions. With respect to ICER submissions, this may include not only work that has been published in ACM-affiliated venues like ICER, ITiCSE, SIGCSE, Koli Calling, but also the wide range of conferences and journals in the learning sciences, education, educational psychology, HCI, and software engineering. If you are new to research, consider guides on study design and surveys of prior work like the 2019 Cambridge Handbook of Computing Education Research, which attempts to survey most of what we know about computing education up to 2018.

Papers will be judged on how adequately they are grounded in prior work published across academia. They will also be assessed on the accuracy with which they cite related work: read what you cite closely and ensure that the findings in published work actually support your claims; many of the authors of the works you are likely to cite are members of the computing education research community and may be your reviewers. Finally, papers will also be expected to return to prior work in a discussion of a paper’s contributions. All papers should explain how the paper’s contributions advance upon prior work, cause us to reinterpret prior work, or reveal conflicts with prior work.

How might theory be used?

Different disciplines across academia vary greatly on how they use and develop theory. At the moment, the position of the community is that theory can be a useful tool for framing research, connecting it to prior work, and interpreting findings. Papers can also contribute new theories, or refine them. However, it may also be possible for papers to be atheoretical, discovering interesting new relationships or interventions that cannot yet be explained. All of these uses of theory are appropriate.

It is also possible to misuse theory. Sometimes the theories used are too general for a question, where a theory more specific to computing education might be appropriate. In other cases, a theory might be wrongly applied to some phenomena, or a paper might use a theory that has been discredited. When using a theory, be careful to understand its history, the body of evidence for and against its claims, and its scope of relevance.

Note that our community has discussed the role of theory multiple times, and that conversations about how to use theory are evolving:

  • Nelson and Ko (2018) argued that there are tensions between expectations of theory building and innovative exploration of design ideas, and that our field’s theory building should focus on theories specific to computing education.

  • Malmi et al. (2019) found that while computing education researchers have widely cited many dozens of unique theoretical ideas about learning, behavior, beliefs, and other phenomena, the use of theory in the field remains somewhat shallow.

  • Kafai et al. (2019) argued that there are many types of theories, and that we should more deeply leverage their explanatory potential, especially theories about the sociocultural and societal factors at play in computing education, not just the cognitive factors.

In addition to using theories when appropriate, ICER encourages the contribution of new theories. There is not a community-level consensus on what constitutes a good theory contribution, but there are examples you might learn from. Papers proposing a new theoretical model should consider including concrete examples of said model.

How should educational contexts be described?

If you’re reporting empirical work in a specific education context or set of contexts, it is important to remember that our research community is global, and that education systems across the world are structured differently. This is of particular importance when describing research that took place in primary and secondary schools. Keep in mind that not all readers will be familiar with your educational context. Describe the structure of the educational system. Define terminology related to your education system. Characterize who is teaching, and what prior knowledge and preparation they have. When describing learners, at a minimum, describe their gender, race, ethnicity, age, level in school, and prior knowledge (assuming collecting and publishing this type of data is legal in the context in which the study was conducted; see also the ACM Publications Policy on Research Involving Human Participants and Subjects). Include information about other structural factors that might affect how the results are interpreted, including whether courses are required or elective, what incentives students have to enroll in courses, and how students in courses vary. For authors in the United States, common terminology to avoid includes “elementary school”, “middle school”, “high school”, and “college”, which do not have well-defined meanings elsewhere. Use the more globally inclusive phrases “primary”, “secondary”, and “post-secondary”. Given the broad spectrum of, e.g., introductory computing courses that run under the umbrella of “CS1”, make sure to provide enough information about the course content rather than relying on an assumed shared understanding.

What details should we report about our methods?

ICER values a wide range of methods of all kinds, including quantitative, qualitative, design, argumentation, and more. It is critical to describe your methods in detail, both so that reviewers and readers can understand how you arrived at your conclusions, and so they can evaluate the appropriateness of your methods both to the work and, for readers, to their own contexts.

Some contributions might benefit from following the Center for Open Science’s recommendations to ensure replicable, transparent science. These include practices such as:

  • Data should be posted to a trusted repository.

  • Data in that repository is properly cited in the paper.

  • Any code used for analysis is posted to a trusted repository.

  • Results are independently reproduced.

  • Materials used for the study are posted to a trusted repository.

  • Studies and their analysis plans are pre-registered prior to being conducted.

Our community is quite far from adopting any of these standards as expectations. Additionally, pursuing many of these goals might impose significant barriers to conducting research ethically, as educational data can often not be sufficiently anonymized to prevent disclosing identity. Therefore, these supplementary materials are not required for review, but we encourage you to include them where feasible and ethical.

The ACM has adopted a new policy on Research Involving Human Participants and Subjects that requires research to be conducted in accordance with ethical and legal standards. In accordance with the policy, your methods description should briefly describe how these standards were met. This can be as simple as a sentence that your study design was reviewed by a local review board (IRB), or a few sentences with key details if you engaged with human subjects and an IRB review was not appropriate to your context or work. Read the ACM policy for additional details.

How should we report statistics?

The world is moving beyond p-values, but computing education, like most of academia, still relies on them. When reporting the results of statistical hypothesis tests, it is critical to report:

  • The test used

  • The rationale for choosing the test, including a discussion of the data characteristics that allowed this test to be used

  • The test statistic computed

  • The actual p-value (not just whether it was greater than or less than an arbitrary threshold)

  • An effect size and its confidence interval.

Effect sizes are especially relevant, as they indicate the extent to which something impacts or explains some phenomenon in computing education; a statistically significant result with a small effect size may matter little for learning in practice. The above data should be reported regardless of whether a hypothesis test was significant. Chapters that introduce statistical methods can be found in the Cambridge Handbook of Computing Education Research.
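
As an illustration, here is a sketch in LaTeX of what a complete report of a single hypothesis test might look like; the test choice and all numbers are placeholders, not results from any actual study:

    % Placeholder values for illustration only.
    We compared exam scores between the two sections using Welch's
    $t$-test, chosen because the samples were independent, approximately
    normal, and had unequal variances: $t(61.2) = 2.41$, $p = .019$.
    The effect was moderate (Cohen's $d = 0.52$, $95\%$ CI $[0.08, 0.95]$).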

Do not assume that reviewers or future readers have a deep understanding of statistical methods (although they might). If you’re using more advanced or non-standard techniques, justify them in detail, so that the reviewers and future readers understand your choice of methods. We recognize that length limits might prevent a detailed explanation of methods for entirely unfamiliar readers; reviewers are expected to not criticize papers for excluding extensive explanations when there was not space to include them.

How should we report on qualitative methods?

Best practices in other fields for addressing the reliability of qualitative methods suggest providing detailed arguments and rationale for qualitative approaches and analyses. Some fields that rely on qualitative methods have moved toward a recoverability criterion, which, like replicability in quantitative methods, aims to ensure a study’s core methods are available for inspection and interpretation; however, recoverability does not imply repeatability, as qualitative methods rely on interpretation, which may not be repeatable.

When qualitative data is counted and used for quantitative methods, authors should report on the inter-rater reliability (IRR) of the qualitative judgements underlying those counts. There are many ways of calculating inter-rater reliability, each with tradeoffs. However, note that IRR analysis is not ubiquitous across social sciences, and not always appropriate; authors should make a clear soundness argument for why it was or was not performed.
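
For example, here is a sketch of such reporting; the values and the closing rationale are placeholders to adapt to your study:

    % Placeholder values for illustration only.
    Two raters independently coded 25\% of the transcripts; agreement was
    substantial (Cohen's $\kappa = 0.78$), after which the remaining data
    were coded by a single rater. We report IRR because our counts feed a
    quantitative comparison across conditions.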

Another challenge in reporting qualitative results is that they require more space in a paper; an abundance of quotes, after all, may take considerably more space than a table full of aggregate statistics. Be careful to provide enough evidence for your claims while being mindful of your use of space.

What makes a good abstract?

A good abstract should summarize the question your paper asks and what answers it found. It is not enough to just say “We discuss our results and their implications”; say what you actually discovered, so future readers can learn that from your summary.

If your paper is empirical in nature, ICER recommends (but does not require) using a structured abstract that contains the following sections, each 1-2 sentences:

  • Background and Context. What is the problem space you are working in? Which phenomena are you considering and why are they relevant and important for an ICER audience?

  • Objectives. What research questions were you trying to answer?

  • Method. What did you do to answer your research questions?

  • Findings. What did you discover? Both positive and negative results should be summarized.

  • Implications. What implications does your discovery have on prior and future research, and on the practice of computing education?

Not all papers may fit this structure, but if yours does, it will greatly help reviewers and future readers understand your paper’s research design and contribution.
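
If you use LaTeX, a sketch of such a structured abstract might look like the following; the section names come from the list above, and the sentences are placeholders:

    \begin{abstract}
    \textbf{Background and Context.} One or two sentences on the problem
    space and why it matters to an ICER audience.
    \textbf{Objectives.} The research questions you set out to answer.
    \textbf{Method.} What you did to answer them.
    \textbf{Findings.} What you discovered, both positive and negative.
    \textbf{Implications.} What your discovery means for prior and future
    research and for the practice of computing education.
    \end{abstract}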

What counts as plagiarism?

Read ACM’s policy on Plagiarism, Misrepresentation, and Falsification; these criteria will be applied during review. In particular, attention will be paid to avoiding redundant publication.

Who should be an author on my paper?

ICER follows ACM’s Authorship Policy and Publications Policy on the Withdrawal, Correction, Retraction, and Removal of Works from ACM Publications and ACM DL. These state that any person listed as an author on a paper must (1) have made substantial contributions to the work, (2) have participated in drafting/revising the paper, (3) be aware that the paper has been submitted, and (4) agree to be held accountable for the content of the paper. Note that this policy allows enforcement of plagiarism sanctions, but it could impact people who work in large, collaborative research groups and postgraduate advisors who have not contributed directly to a paper.

Must submissions be in English?

At the moment, yes. Our reviewing community’s only lingua franca is English, and any other language would greatly limit the pool of expert reviewers to evaluate your work. We recognize that this is a challenging barrier for many authors globally, and that it greatly limits the diversity of voices in global discourse on computing education. Therefore, we wish to express our support of other computing education conferences around the world that you might consider submitting papers to. To mitigate this somewhat, papers will not be penalized for minor English spelling and grammar errors that can easily be corrected with minor revisions.

Resources

American Educational Research Association. (2006). Standards for reporting on empirical social science research in AERA publications. Educational Researcher, 35(6), 33–40. http://edr.sagepub.com/content/35/6/33.full.pdf+html.

Decker, A., McGill, M. M., & Settle, A. (2016). Towards a Common Framework for Evaluating Computing Outreach Activities. In Proceedings of the 47th ACM Technical Symposium on Computing Science Education (SIGCSE ’16). ACM, New York, NY, USA, 627-632. DOI: https://doi.org/10.1145/2839509.2844567.

Fincher, S. A., & Robins, A. V. (Eds.). (2019). The Cambridge Handbook of Computing Education Research. Cambridge University Press. DOI: https://dx.doi.org/10.1017/9781108654555.

Petre, M., Sanders, K., McCartney, R., Ahmadzadeh, M., Connolly, C., Hamouda, S., Harrington, B., Lumbroso, J., Maguire, J., Malmi, L., McGill, M. M., & Vahrenhold, J. (2020). Mapping the Landscape of Peer Review in Computing Education Research. In: ITiCSE-WGR ’20: Proceedings of the Working Group Reports on Innovation and Technology in Computer Science Education, ACM, New York, NY, USA, 173–209. DOI: https://doi.org/10.1145/3437800.3439207.

ICER 2024 Review Process and Guidelines

This document is a living document intended to capture the reviewing policies of the ICER community. Please email the Program Co-Chairs at pc-chairs@icer.acm.org with comments or questions; all will be taken into account when updating this document for ICER 2025.

This document is based on the ICER 2020/2021 Reviewing Guidelines (Amy Ko, Anthony Robins & Jan Vahrenhold) as well as the ICSE 2022 Reviewing Guidelines (Daniela Damian & Andreas Zeller). We are thankful for the input on these earlier documents provided by members of the ICER community.

Table of Contents

  1. Goals of the ICER Reviewing Process
  2. Action Items
  3. Submission System
  4. Roles in the Review Process
  5. Principles Behind ICER Reviewing
  6. Conflicts of Interest
  7. The Reviewing Process
  8. Review Criteria
  9. Award Recommendations
  10. Possible Plagiarism, Misrepresentation, and Falsification
  11. Practical Suggestions for Writing Reviews

1. Goals of the ICER Reviewing Process

The ICER Reviewing Process as outlined in this document is designed to support reaching the following goals:

  • Accept high quality papers
  • Give clear feedback to papers of insufficient quality
  • Evaluate papers consistently
  • Provide transparency in the review process
  • Embrace diversity of perspectives, but work in an inclusive, safe, collegial environment
  • Drive decisions by consensus among reviewers
  • Strive for manageable workload for PC members
  • Do our best on all of the above

2. Action Items

Prior to continuing to read this document, please do the following:

PC/Reviewers

Key dates:

  • Prior to March 29, 2024: Familiarize yourself with the ICER 2024 Reviewing Guidelines
  • March 22 – 29, 2024: Bid on papers and declare conflicts
  • March 29 – April 26, 2024: Review roughly 5 papers, depending on submission volume
  • April 26 – May 6, 2024: Asynchronously discuss papers with other reviewers and the Senior PC members assigned to your papers.

SPC/Meta-Reviewers

Key dates:

  • Prior to March 29, 2024: Familiarize yourself with the ICER 2024 Reviewing Guidelines
  • March 22 – 29, 2024: Bid on papers and declare conflicts
  • March 29 – April 26, 2024: Monitor the reviewing of 7-9 papers, depending on submission volume
  • April 26 – May 6, 2024: Asynchronously discuss papers with the reviewers assigned to those papers, prepare meta-review and recommendation.
  • Tuesday May 7, 2024: Complete submission of meta-reviews for all papers
  • Friday May 10 – Wednesday May 15, 2024: Skim the paper and read reviews for 1-2 submissions (which you did not handle) that will be discussed at the Senior PC meeting.
  • Wednesday May 15 – Friday May 17, 2024: Participate in synchronous, online SPC meetings to discuss and decide on papers without consensus and finalize your meta-reviews. Based on last year’s experiences, we are currently planning for up to four such virtual meetings to accommodate different time zones. As last year, we expect that your participation in two of these meetings will be necessary and sufficient. Overall, expect a four-hour commitment during this week, plus time to review your 1-2 assigned discussion papers.
  • Thursday May 30 – Thursday June 6, 2024: Re-check (a hopefully very, very small number of) minor revisions against the requests made in “conditional accept” decisions.

If you are new to reviewing in the Computing Education Research community, the following ITiCSE Working Group Report may serve as an introduction:

  • Petre M, Sanders K, McCartney R, Ahmadzadeh M, Connolly C, Hamouda S, Harrington B, Lumbroso J, Maguire J, Malmi L, McGill MM, Vahrenhold J. 2020. “Mapping the Landscape of Peer Review in Computing Education Research.” In ITiCSE-WGR ’20: Proceedings of the Working Group Reports on Innovation and Technology in Computer Science Education, edited by Rößling G, Krogstie B, 173–209. New York, NY: ACM. DOI: 10.1145/3437800.3439207.

3. Submission System

ICER 2024 uses the HotCRP platform for its reviewing process. If you are unfamiliar with it, you will find a basic tutorial below. But first, make sure you can sign in, then bookmark it: http://icer2024.hotcrp.com. If you have trouble signing in, or you need help with anything, contact Rodrigo Duran (rodrigo.duran@ifms.edu.br) and Juho Leinonen (juho.2.leinonen@aalto.fi), the ICER 2024 submission chairs. Make sure that you can log in to HotCRP and that your name and other metadata are correct. Check that emails from HotCRP are not marked as spam and that HotCRP email notifications are enabled.

4. Roles in the Review Process

Program Committee (PC) Chairs

Each year there are two program committee co-chairs. The PC chairs are solicited by the ICER Steering Committee and appointed by the SIGCSE Board to serve a two-year term. One new appointment is made each year so that in any given year there is always a continuing program chair from the prior year and a new program chair. Appointment criteria include prior attendance and publication at ICER, past service on the ICER Program Committee, research excellence in computing education, and the collaborative and organizational skills to share oversight of the program selection process.

Program Committee (PC) Members / Reviewers

PC members write reviews of submissions, evaluating them against the review criteria. The PC chairs invite and appoint the reviewers. The committee is sized so that each reviewer will review 5-6 submissions, or more depending on the size of the submission pool. Each reviewer serves a one-year term, with no limit on reappointment. Appointment criteria include expertise in relevant areas of computing education research and past reviewing experience in computing education research venues. Together, all reviewers constitute the program committee (PC). The PC chairs are responsible for inviting returning and new members of the PC, keeping in mind the various forms of diversity that are present at ICER.

Senior Program Committee Members (SPC) / Meta-Reviewers

SPC members review the PC members’ reviews, ensuring that review content is constructive and aligned with the review criteria, summarizing the reviews, and making a recommendation for each paper’s acceptance or rejection. They also moderate the discussion about each paper and, where necessary, give feedback to reviewers and ask them to improve the quality of their reviews. Finally, they participate in a synchronous SPC meeting to make final recommendations about each paper, and review authors’ minor revisions. The PC chairs invite and appoint Senior PC members, with the approval of the Steering Committee, again keeping in mind the various forms of diversity that are present at ICER. Each Senior PC member can be appointed for up to three years in a row; after a hiatus of at least one year, preferably two, re-appointment is possible. The committee is sized so that each meta-reviewer will handle 8-10 papers, depending on the submission pool.

5. Principles Behind ICER Reviewing

The ICER review process is designed to work towards these goals:

  • Maximize the alignment between a paper and expertise required to review it.
  • Minimize conflicts of interest and promote trust in the process.
  • Maximize our community’s ability to make excellent, rigorous, trustworthy contributions to the science of computing education.

The call for papers and author guide should make this clear: ICER is broadly scoped. The conference publishes research on the teaching and learning of computer science in any context. Consequently, reviewers should not downgrade papers for addressing a topic they personally perceive as less important to computing education. If the work is ready for publication and reviewers believe it is of interest to some part of the computing education community, it should be published so that the community can decide its importance over time.

6. Conflicts of Interest

ICER takes conflicts of interest, both real and perceived, quite seriously. The conference adheres to the ACM conflict of interest policy (https://www.acm.org/publications/policies/conflict-of-interest) as well as the SIGCSE conflict of interest policy (https://sigcse.org/policies/COI.html). These state that a paper submitted to the ICER conference is a conflict of interest for an individual if at least one of the following is true:

  • The individual is a co-author of the paper
  • A student of the individual is a co-author of the paper
  • The individual identifies the paper as a conflict of interest, i.e., the individual does not believe that they can provide an impartial evaluation of the paper.

The following policies apply to conference organizers:

  • The chairs of any track are not allowed to submit to that track.
  • All other conference organizers are allowed to submit to any track.
  • All reviewers (PC members) and meta-reviewers (SPC members) are allowed to submit to any track.

No reviewer, meta-reviewer, or chair with a conflict of interest in the paper will be included in any evaluation, discussion, or decision about the paper. It is the responsibility of the reviewers, meta-reviewers, and chairs to declare their conflicts of interest throughout the process. The corresponding actions are outlined below for each relevant step of the reviewing process. It is the responsibility of the chairs to ensure that no reviewer or meta-reviewer is assigned a role in the review process for any paper for which they have a conflict of interest.

7. The Reviewing Process

Step 1: Authors Submit Abstracts

Authors will submit a title and abstract one week before full papers are due, so that bidding and paper assignment can begin. Authors may revise their title and abstract up until the full paper submission deadline.

Step 2: Reviewers and Meta-Reviewers Bid for Papers

Reviewers and meta-reviewers will be asked to bid on papers for which they have sufficient expertise (in both phenomena and methods), and the PC chairs will then assign papers based on these bids. The purpose of bidding is not to express interest in papers you want to read. It is to express your expertise and eligibility for fairly evaluating the work. These are subtly but importantly different purposes.

  • Specify all of your conflicts of interest. A conflict is any situation in which you have a connection with a submission that is in tension with your role as an independent reviewer (you advised an author, you have collaborated with an author, you are at the same institution, you are close friends, etc.). After declaring conflicts, you will be excluded from all future evaluation, discussion, and decisions on that paper. Program chairs and submissions chairs will also specify conflicts of interest at this time.
  • Bid on all of the papers you believe you have sufficient expertise to review. Sufficient expertise includes knowledge of research methods used and prior research on the phenomena. Practical knowledge of a topic is helpful, but insufficient.
  • Do not bid on papers about topics, techniques, or methods that you strongly oppose. This protects authors from being reviewed by reviewers with a negative bias; see below for positive biases and how to control for them.

Step 3: Authors Submit Papers

Submissions are due one week after the abstracts are due. As you read in the submission instructions (https://icer2024.acm.org/track/icer-2024-papers#Submission-Instructions), submissions are supposed to be sufficiently anonymous that a reader cannot determine the identity or affiliation of the authors. The main purpose of ICER’s anonymous reviewing process is to reduce the influence of potential (positive or negative) biases on reviewers’ assessments. You should be able to review the work without knowing the authors or their affiliations. Do not try to find out the identity of authors. (Most guesses will be wrong anyway.) See the submission instructions for what constitutes sufficient anonymization. When in doubt, write the PC chairs for clarity at pc-chairs@icer.acm.org.

Step 4: PC Chairs Decide on Desk-Rejects

The PC chairs, with the help of the submissions chairs, will review each submission for violations of anonymization requirements, length restrictions, or plagiarism policies. Authors of desk-rejected papers will be notified immediately. The PC chairs may not catch every issue. If you see something during review that you believe should be desk rejected, contact the chairs before you write a review; the PC chairs will make the final judgement about whether something is a violation and give you guidance on whether and, if so, how to write a review.

Managing Conflicts of Interest

A PC chair with a conflict on a paper is excluded from its desk-reject decision, leaving the decision to the other program chair.

Step 5: PC Chairs Assign Reviewers

Based on the bids and their judgement, the PC chairs will collaboratively assign at least three reviewers (PC members) and one meta-reviewer (SPC member) for each submission. The PC chairs will be advised by HotCRP’s assignment algorithm, which depends on all bids being high quality. Remember, for these assignments to be fair and good, your bids should only be based on your expertise and eligibility. Interest alone is not sufficient for bidding on a paper. The chairs will review the algorithm’s assignments to identify potential misalignments with expertise.

Managing Conflicts of Interest

PC chairs with conflicts are excluded from assigning reviewers to any papers for which they have a conflict. Assignments in HotCRP can only be made by a PC chair without a conflict.

Step 6a: Reviewers Review Papers

Assigned reviewers submit their anonymous reviews through HotCRP by the review deadline, evaluating each of their papers against the review criteria (see Review Criteria). The time allocated for reviewing is four weeks, in which 5-6 reviews need to be written. Due to internal and external (publication) deadlines, there cannot be any extensions.

Managing Conflicts of Interest

Reviewers, meta-reviewers, and PC chairs with conflicts cannot see any of the reviews of the papers they are conflicted on during this process.

Step 6b: Meta-Reviewers and PC Chairs Monitor Progress

Meta-reviewers and PC chairs will periodically check in to ensure that progress is being made.

Step 7: Reviewers and Meta-Reviewers Discuss Reviews

After the reviewing period, the assigned meta-reviewer asks the reviewers to read the other reviewers’ reviews and begin a discussion about any disagreements that arise. All reviewers are asked to do the following:

  • Read all the reviews of all papers assigned (and re-read your own reviews).
  • Engage in a discussion about sources of disagreement.
  • Use the review criteria to guide your discussions.
  • Be polite, friendly, and constructive at all times.
  • Be responsive and react as soon as new information comes in.
  • Remain open to other reviewers shifting your judgements.

If your judgement does shift, update your review to reflect your new views. There is no need to indicate to the authors that you changed your review, but do leave a comment for the other reviewers and the meta-reviewer indicating what you changed and why (HotCRP does not track changes).

Discussing a paper is not about who wins or who is right. It is about how, in light of all the information, a group of reviewers can reach the best decision on a paper. All reviewers (and the authors!) have their own unique perspective and competence. It is perfectly normal that they may have seen things you have not, just as you may have seen things they have not. The important thing is to accept that the group will see more than the individual. Therefore, you can always (and are encouraged to!) shift your stance in light of the extra knowledge.

PC chairs will periodically check in. If you have configured HotCRP notifications correctly, you will be notified as soon as new information (another review or a new discussion item) about your paper comes in. It is important that you react to these as soon as possible. Do not let your colleagues wait for days when all that is needed is a short statement from your side.

Managing Conflicts of Interest

Reviewers, meta-reviewers, and PC chairs with conflicts cannot see any of the discussions of the papers they are conflicted on during this process.

Step 8: Meta-Reviewers Write Meta-Reviews

After the discussion phase, meta-reviewers use the reviews, the discussion, and their own evaluation of the work to write a meta-review and recommendation. A meta-review should summarize the key strengths and weaknesses of the paper, in light of the review criteria, and explain how these led to the decision. The summary and explanation should help the authors in revising their work where appropriate. A generic meta-review (“After long discussion, the reviewers decided that the paper is not up to ICER standards, and therefore rejected the paper”) is not sufficient. There are four possible meta-review recommendations: reject, discuss, conditional accept, and accept. The recommendation needs to be entered in the meta-review.

  • Reject. Ensure that the meta-review constructively summarizes the reviews and the rationale for rejection. The PC chairs will review all meta-reviews to ensure that reviews are constructive, and may request meta-reviewers to revise their meta-reviews as necessary. The PC chairs will make the final rejection decision based on the meta-review rationale; if necessary, this paper will be discussed at the SPC meeting.
  • Discuss. Ensure that the meta-review summarizes the open questions that need to be resolved at the SPC meeting discussion, where the paper will be recommended as reject, conditional accept, or accept. Papers marked “discuss” will be scheduled for discussion at the SPC meeting. All papers for which the opinion of the meta-reviewer and the majority of reviewer recommendations do not align should be marked “discuss” as well.
  • Conditional Accept. Ensure that the meta-review explicitly and clearly states the conditions that must be met through minor revisions before the paper can be accepted. For a conditional accept, the requested revisions must be feasible within the one-week revision period, so they must be minor. The PC chairs will make the final decision on whether the requested revisions are minor enough to warrant conditional acceptance; if necessary, the paper will be discussed at the SPC meeting.
  • Accept. These papers will be accepted, assuming authors deanonymize the paper and meet the final version deadline. For technical reasons, “accept” recommendations are recorded internally as “conditional accept” recommendations that do not state any conditions for acceptance other than submitting the final version. The PC chairs will make the final acceptance decision based on the meta-review rationale; if necessary, this paper will be discussed at the SPC meeting.

Managing Conflicts of Interest

Reviewers, meta-reviewers, and PC chairs with conflicts cannot see any of the recommendations or meta-reviews of the papers they are conflicted on during this process.

Step 9: PC Chairs and Meta-Reviewers Discuss Papers

The PC chairs will host synchronous SPC meetings with all available meta-reviewers (SPC members) to discuss and decide on all “discuss” and “conditional accept” papers. Before this meeting, a second meta-reviewer will be assigned to each such paper, ensuring that there are at least two meta-reviewers to facilitate discussion. Each meta-reviewer assigned to a paper should come prepared to present the paper, its reviews, and the HotCRP discussion. Each meta-reviewer’s job is to present their recommendation or, if they requested discussion, the uncertainty that prevents them from making one. All meta-reviewers who are available to attend an SPC meeting session should, at a minimum, skim each of the papers to be discussed and their reviews (excluding those for which they are conflicted), so that they are familiar with the papers and their reviews prior to the discussions.

At the meeting, the goal is to collectively reach consensus, rather than relying on the PC chairs alone to make final decisions. Papers may move from “discuss” to “reject”, “conditional accept”, or “accept”; if there are conditions, they must be approved by a majority of the non-conflicted SPC members and PC chairs at the discussion. After a decision is made in each case, the original SPC member will add a summary of the discussion at the end of their meta-review, explaining the rationale for the final decision as well as any conditions for acceptance, and will update the recommendation tag in HotCRP.

Managing Conflicts of Interest

Meta-reviewers conflicted on a paper will not be assigned as a second reader. Any meta-reviewer or PC chair conflicted on a paper will be excluded from the paper’s discussion, returning after the discussion is over.

Step 10: PC Chair Review

Before announcing decisions, the non-conflicted PC chairs will review all meta-reviews to ensure as much clarity and consistency with the review process and its criteria as possible.

Managing Conflicts of Interest

PC chairs cannot change the outcome of an accept or reject decision after the SPC meeting.

Step 11: Notifications

After the SPC meeting, the PC chairs will notify all authors of the decisions about their papers; these notifications will be sent via email through HotCRP. Authors of (unconditionally) accepted papers will be encouraged to make any changes that were suggested but not required; authors of conditionally accepted papers will be reminded of the revision deadline.

Step 12: Authors of Conditionally Accepted Papers Revise Their Papers

Authors of conditionally accepted papers have one week to incorporate the requested revisions and to submit their final versions for review by the assigned meta-reviewer.

Step 13: Meta-Reviewers Check Revised Papers

Meta-reviewers will check the revised papers against the required revisions. Based on the outcome of this, they will change their recommendation to either “accept” or “reject” and will update their meta-reviews to reflect this.

Managing Conflicts of Interest

Reviewers, meta-reviewers, and PC chairs with conflicts cannot see any of the recommendations or meta-reviews of the papers they are conflicted on during this process.

Step 14: Notifications

PC chairs will sanity-check all comments on papers for which revisions were submitted. Conditionally accepted papers for which no revisions were received will be marked as “reject”. The PC chairs then finalize decisions: all recommendations will be converted to official accept or reject decisions in HotCRP, and authors will be notified of these final decisions via email sent through HotCRP. Authors will then have one week to submit to ACM TAPS for final publication.

Managing Conflicts of Interest

Reviewers, meta-reviewers, and PC chairs with conflicts cannot see any of the recommendations or meta-reviews of the papers they are conflicted on during this process. PC chairs with conflicts cannot see or edit any final decision on these papers.

8. Review Criteria

ICER currently evaluates papers against the following reviewing criteria, as independently as possible. The criteria have been carefully chosen to be inclusive of many phenomena, epistemologies, and contribution types.

To be published at ICER, papers should be positively evaluated on all of them; a summary judgement of the paper as a whole serves as a final criterion.

Below, we discuss each criterion in turn.

Criterion A: The submission is grounded in relevant prior work and leverages available theory when appropriate.

Papers should draw on relevant prior work and theories, and explicitly show how these are tied to the questions addressed. After reading the paper, one should feel more informed about the prior literature and how that literature relates to the paper’s contributions. Such coverage of related work might come before a work’s contributions, or it might come after (e.g., connecting a new theory derived from observations to prior work). Note that not all types of research will have relevant theory to discuss, nor do all contribution types need theory to make significant advances. For example, a surprisingly robust but unexplained correlation might be an important discovery that later work could develop theory to explain. Reviewers should identify related work the authors might have missed and include pointers. Missing a relevant paper that would not dramatically change the work is not sufficient grounds for rejection; such citations can be added at reviewers’ request prior to publication. Instead, criticism that leads to downgrading a paper should focus on missing prior work or theories that would significantly alter the research questions, analysis, or interpretation of results.

Guidelines for (Meta-)Reviewers

Since prior work and theories need to be covered sufficiently and meaningfully, but not necessarily exhaustively, (meta-)reviewers are asked to do the following:

  • Refrain from downgrading work based on missing one or two peripherally related papers. Just note them, helping the authors to broaden their citations.
  • Refrain from downgrading work based on not citing the reviewer’s own work, unless it really is objectively highly relevant.
  • Refrain from downgrading work based on where in a paper they address prior work. Sometimes a dedicated section is appropriate, sometimes it is not. Sometimes prior work is better addressed at the end of a paper, not at the beginning.
  • Make sure to critically note if work simply lists papers without meaningfully addressing their relevance to the paper’s questions or innovations.
  • Refrain from downgrading work based on making discoveries inconsistent with theory. The point of empirical work is to test and refine theories, not conform to them.
  • Refrain from downgrading work for not building upon theory when no sufficient theory is available to point to in the review. Conversely, if a relevant theory is missing, it should be named.
  • Refrain from downgrading work based on not using the reviewer’s interpretation of a theory. Many theories have multiple competing interpretations and multiple distinct facets that can be seen from multiple perspectives.

Criterion B: The submission describes its methods and/or innovations sufficiently for others to understand how data was obtained, analyzed, and interpreted, or how an innovation works.

An ICER paper should be self-contained in the sense that readers should be able to understand most of the key details about how the authors conducted their work or made their innovation possible. This is key for replication and meta-analysis of studies that come from positivist or post-positivist epistemologies. For interpretivist works, it is also key for what Checkland and Holwell called “recoverability” (see Tracy 2010 for a detailed overview of criteria for evaluating qualitative work). Reviews should thus focus on omissions of research-process or innovation details that would significantly alter your judgement of the paper’s validity.

Guidelines for (Meta-)Reviewers

Since ICER papers must adhere to a word-count limit, and since there are always more details a paper could describe about its methods, (meta-)reviewers are asked to do the following:

  • Refrain from downgrading work based on not describing every detail.
  • Refrain from asking authors to write substantially new method details unless you can identify content for them to cut, or there is space to add those details within the length restrictions.
  • Refrain from asking authors of theory contributions for a traditional methods section; such contributions do not require them, as they are not empirical in nature.
  • Feel free to ask authors for minor revisions that would support replication or meta-analysis for positivist or post-positivist works, and recoverability for interpretivist works using qualitative methods.

Criterion C: The submission’s methods and/or innovations soundly address its research questions.

The paper should answer the questions it poses, and it should do so with rigor, broadly construed. This is the single most important difference between research papers and other kinds of knowledge sharing in computing education (e.g., experience reports), and the source of certainty researchers can offer. Note that soundness is relative to claims. For example, if a paper claims to have provided evidence of causality, but its methods did not do that, that would be grounds for critique. But if a paper only claimed to have found a correlation, and that correlation is a notable discovery that future work could explain, downgrading it for not demonstrating causality would be inappropriate.

Guidelines for (Meta-)Reviewers

Since soundness is relative to claims and methods, (meta-)reviewers are asked to do the following:

  • Refrain from applying criteria for quantitative methods to qualitative methods (e.g., critiquing a case study for a “small N” makes no sense; that is the point of a case study).
  • Refrain from downgrading work based on a lack of a statistically significant difference if the study demonstrates sufficient power to detect a difference. A lack of difference can be a discovery, too.
  • Refrain from asking for the paper to do more than it claims if the demonstrated claims are sufficiently publishable (e.g., “I would publish this if it had also demonstrated knowledge transfer”).
  • Refrain from relying on inexpert, anecdotal judgments (e.g., “I don’t know much about this but I played with it once and it didn’t work”).
  • Refrain from assuming that a method not yet used in the computing education literature is not standard elsewhere. The field draws upon methods from many communities; look for evidence that the method is used elsewhere.

Criterion D: The submission advances knowledge of computing education by addressing (possibly novel) questions that are of interest to the computing education community.

A paper can meet the previous criteria and still fail to advance what we know about the phenomena. It is up to the authors to convince you that the discoveries advance our knowledge in some way, e.g., by confirming uncertain prior work, adding a significant new idea, or making progress on a long-standing open question. Secondarily, there should be someone who might find the discovery interesting. It does not have to be interesting to a particular reviewer, and a particular reviewer does not have to be absolutely confident that an audience exists. As the PC cannot possibly reflect the broader audience of all readers, a probable audience is sufficient for publication.

Guidelines for (Meta-)Reviewers

Since advances can come in many forms, many criticisms are inappropriate in isolation (though if several of them apply, together they may justify rejection). (Meta-)reviewers are thus asked to do the following:

  • Refrain from downgrading work because another, single paper was already published on the topic. Discoveries accumulate over many papers, not just one.
  • Refrain from downgrading work that contributes a genuinely new idea for not yet having everything about that idea figured out. Again, new discoveries may require multiple papers.
  • Refrain from downgrading work because the results do not appear generalizable or were only obtained at a specific institution. Many papers explicitly discuss such limitations and possible remedies. Also, generalizability takes time, and, by their very nature, some qualitative methods do not lead to generalizable results.
  • Refrain from downgrading work based on “only” being a replication. Replications, if done with diligence, are important.
  • Refrain from downgrading work based on investigating phenomena you personally do not like (e.g., “I hate object-oriented languages, this work does not matter”).

Criterion E: Discussion of results clearly summarizes the submission’s contributions beyond prior work and its implications for research and practice.

It is the authors’ responsibility to help interpret the significance of a paper’s discoveries. If it makes significant advances, but does not explain what those advances are and why they matter, the paper is not ready for publication. That said, it is perfectly fine if you disagree with the paper’s interpretations or implications. Readers will vary on what they think a discovery means or what impact it might have on the world. All that is necessary is that the work presents some reasonably sound discussion of one possible set of interpretations.

Guidelines for (Meta-)Reviewers

Because there is no single “right” interpretation or discussion of implications, (meta-)reviewers are asked to do the following:

  • Refrain from downgrading work because you do not think the idea would work in your institution.
  • Refrain from downgrading work because you think that the impact is limited. Check the discussion of limitations and threats to validity and evaluate the paper with respect to the claims made.
  • Make sure to critically note if work makes interpretations or proposes implications that are not grounded in evidence.

Criterion F: The submission is written clearly enough to publish.

Papers need to be clear and concise, both to be comprehensible to diverse audiences and to ensure the community is not overburdened by verbosity. We recognize that not all authors are fluent English writers; if, however, a paper requires significant editing to be comprehensible to fluent English readers, or is unnecessarily verbose, it is not yet ready for publication.

Guidelines for (Meta-)Reviewers

Since submissions need only be clear enough to publish, (meta-)reviewers are asked to do the following:

  • Refrain from downgrading work based on having easily fixed spelling and grammar issues.
  • Refrain from downgrading a sufficiently clear paper because it could be clearer. All writing can be clearer in some way.
  • Refrain from downgrading work based on not using all of the available word count. It is okay if a paper is short but significant.
  • Refrain from asking for more detail unless you are certain there is space or, if there is not, you can provide concrete suggestions for what to cut.

Summary: Based on the criteria above, this paper should be published at ICER.

Based on all of the previous criteria, decide how strongly you believe the paper should be accepted or rejected, assuming authors make any modest, straightforward minor revisions you and other reviewers request before publication. Papers that meet all of the criteria should be strongly accepted (though this does not imply that they are perfect). Papers that fail to meet most of the criteria should be strongly rejected. Each paper should be reviewed independently of others, as if it were a standalone journal submission. There are no conference presentation “slots”; there is no target acceptance rate. Neither should be a factor in reviewing individual submissions.

Guidelines for (Meta-)Reviewers

Because each paper should be judged on its own, (meta-)reviewers are asked to do the following:

  • Refrain from recommending to accept a paper because it was the best in your set. It is possible that none of your papers sufficiently meet the criteria.
  • Refrain from recommending to reject a paper because it should not take up a “slot”. The PC chairs will devise a program for however many papers sufficiently meet the criteria, whether that is 5 or 50. There is no need to preemptively design the program through your review; focus on the criteria.

9. Award Recommendations

On the review form, reviewers may signal to the meta-reviewer and PC chairs that they believe the submission should be considered for a best paper award. Selecting this option in the review form is visible to the other (meta-)reviewers as part of your review, but it is not disclosed to the authors. Reviewers should recognize papers that best illustrate the highest standards of computing education research, taking into account the quality of the questions asked, the methodology, the analysis, the writing, and the contribution to the field. This includes papers that meet all of the review criteria in exemplary ways (e.g., research that was particularly well designed, executed, and communicated), or papers that meet specific review criteria in exemplary ways (e.g., discoveries that are particularly significant or sound).

The meta-review form for each paper includes an option to officially nominate a paper to the Awards Committee for the best paper award. Reviewers may flag papers for award consideration during review, but meta-reviewers are ultimately responsible for nominating papers for the best paper award. Each meta-reviewer may nominate at most two papers for the best paper award. Nominated papers may or may not have been flagged by one or more reviewers. Nominations should be recorded in HotCRP and be accompanied by a paragraph outlining the rationale for the nomination. NOTE: Whether a paper has been nominated, and the accompanying rationale, are not disclosed to the authors as part of the meta-review.

Meta-reviewers are encouraged to review and finalize their nominations at the conclusion of the SPC meeting to allow for possible calibration. Once paper decisions have been sent, the submission chair will make the PDFs and corresponding rationales for all nominated papers available to the Awards Chair. Additionally, a list of all meta-reviewers who have handled, or have one or more conflicts of interest with, any nominated paper will be disclosed to the Awards Chair, as those members are not eligible to serve on the Awards Committee.

10. Possible Plagiarism, Misrepresentation, and Falsification

If, after reading a submission, you suspect that it has in some way plagiarized from another source, or that it misrepresents or falsifies its data or results, contact the PC chairs at pc-chairs@icer.acm.org with your concerns.

The chairs will investigate and decide as necessary prior to the acceptance notification deadline. Do not mark the paper for rejection based on suspected plagiarism; review and mark the paper as it stands while the PC chairs investigate.

11. Practical Suggestions for Writing Reviews

The following suggestions may be helpful when reviewing papers:

  1. Before reading, remind yourself of the preceding reviewing criteria.
  2. Read the paper, and as you do, note positive and negative aspects for each of the preceding reviewing criteria.
  3. Use your notes to outline a review organized by the seven criteria, so authors can understand your judgments for each criterion.
  4. Draft your review based on your outline.
  5. Edit your review, making it as constructive and clear as possible. Even a very negative review should be respectful to the author(s), helping to educate them. Avoid comments about the author(s) themselves; focus on the document.
  6. Based on your review, choose scores for each of the criteria.
  7. Based on your review and scores, choose a recommendation score and decide whether to recommend the paper for consideration for a best paper award.

Thank you very much for reading this document and thank you very much for being part of the ICER reviewing process. Do not hesitate to email the Program Co-Chairs at pc-chairs@icer.acm.org if you have any questions.