Micro 2012 Program Chair's Remarks Onur Mutlu, PC Chair December 3, 2012 Vancouver, BC, Canada 1

Purpose of This Session Provide insight and transparency into the Micro-45 paper submission and selection process Provide statistics and some informal analyses Describe some new things we tried Get your feedback and involve the community Enable better future Micro conferences Hopefully, enable the institution of best practices 2

Basic Process and Statistics 228 submissions; 40 papers accepted (17.5% Acc. Rate) Reviewed by The 44-member Program Committee The 106-member External Review Committee (fully involved) At least 98 more external/aiding reviewers Review process 1325 total reviews, 815 from PC members; 5.8 reviews/paper Rebuttal period to allow for author response Extensive online discussions (160 papers) 1.5-day PC meeting on August 25-26, 2012 All PC members present the entire first day After decision All papers shepherded 3

More Information on the Process Now: Program Chair's Remarks Statistics on submitted and accepted papers Paper selection process Metrics used to rank papers for discussion Best paper selection process New things we tried this year What worked and what did not? What can we do better? Message from the Program Chair (in your proceedings) http://www.microsymposia.org/micro45/message-from-the-program-chair.pdf Feedback very much welcome I would appreciate all kinds of honest feedback 4

Our Goals in New Things Tried Improve quality, transparency; institute best practices 1. Quality of the reviews and paper selection process 2. Transparency of the selection process 3. Strong involvement of external reviewers and community 4. Authors' ability to respond to initial reviewer evaluations and potential questions that may come up after rebuttal 5. Quality of the final program and final versions 6. Quality of interactions during conference (especially in the presence of parallel sessions) 5

The Life of a Micro Submission/Paper Submission process Review process Rebuttal process Post rebuttal online discussion and re-scoring PC meeting Post-PC meeting During conference After conference ad infinitum 6

Paper Submission Process Double-blind submission and review process Conflicts marked by authors Submission format the same as the final format; no extra page charges Main goal: eliminate fairness issues Mitigates the concern "Does this paper fit in the final format?" Could be done better in the future We did: 12-page 10-pt submissions, 12-page 9-pt final format Better: 11-page 9-pt submissions, 12-page 9-pt final format. 7

Multi-core systems, multiprocessors; Memory systems; Energy and power-efficiency; Core/processor microarchitecture; Interconnects and communication mechanisms; Reliability, fault tolerance, variability; Emerging technologies; System architecture; Modeling, measurement, and simulation methods; Data parallelism / SIMD; Parallel programming / parallelization; Hardware/software codesign or cooperation; Special-purpose systems, accelerators; Graphics and GPUs; Compilers; Circuit-architecture interaction and codesign; Hardware/software interface; Thread-level parallelism; Workload characterization; Embedded systems; Reconfigurable systems; Data center, cloud, cluster systems; Instruction-level parallelism; System software (OS, virtualization) support; Real system evaluation and analysis; Programmer productivity / debugging; Dynamic program optimization; Mobile systems; Storage; Security; Quality of service; I/O (input/output) [bar chart: number of papers submitted and accepted per topic area] 8

Origin of Papers [bar chart: number of papers submitted and accepted, by affiliation: academia only, industry only, both] 9

Main Country of Papers: USA, China, Spain, Canada, Korea, France, Switzerland, Sweden, Cyprus, UK, Israel, Taiwan, Singapore, Iran, Belgium, Palestine, Japan, Italy, India, Germany, Chile, Brazil [bar chart: number of papers submitted and accepted per country] 10

Number of Authors Per Paper [bar chart: number of papers submitted and accepted, by number of authors from 1 to 10] 11

Optional Information with Submission Authors able to provide an appendix Peripheral material that can aid reviewers, such as full proofs of theorems, details of the experiments conducted, or more experimental results Main goal: Satisfy reviewer curiosity; handle out-of-scope issues Authors able to say the paper was submitted to a past conference Authors able to provide past reviews and responses to them (if the paper was submitted to a previous venue and rejected) Main goal: proactive addressing of potential concerns that can come up post-rebuttal Anticipate and address "I had reviewed this paper previously and the authors did ..." 12

Papers with an Appendix [bar chart: papers with an appendix vs. total, for submitted and accepted papers] 13

Papers with Past Reviews [bar chart: papers with past reviews vs. total, for submitted and accepted papers] 14

Papers with "Submitted to Past Conference" Checked [bar chart: papers with the box checked vs. total, for submitted and accepted papers] 15

PC-Authored Papers [bar chart: PC-authored papers vs. total, for submitted and accepted papers] 16

Review Process Reviewers The 44-member Program Committee The 106-member External Review Committee (fully involved) At least 98 more external/aiding reviewers Review form explicitly asked for three specific rebuttal questions External reviewers fully involved in the process, up until the PC meeting 17

Number of Reviews 1325 total reviews 815 from PC members 510 from external reviewers All reviewers assigned by me 5.8 reviews/paper 216 papers received >= 5 reviews 159 papers received >= 6 reviews 44 papers received >= 7 reviews 18

Reviews Categorized by Expertise Score [bar chart: number of reviews for submitted and accepted papers at each expertise level: expert in the subject area of the paper; knowledgeable, but not an expert in the subject area; some familiarity with the subject area; little familiarity with the subject area] 19

Reviews Categorized by Novelty Score [bar chart: number of reviews for submitted and accepted papers at each novelty level: surprisingly new contribution; new contribution; incremental improvement; very incremental improvement; done before (not necessarily published); published before] 20

Can We Publish Your Review? [bar chart: number of reviews answering yes vs. no, for submitted and accepted papers] 21

Rebuttal Process Authors' chance to respond to reviewer concerns after initial evaluation by reviewers Authors were able to see all initial evaluation scores and all comments of reviewers All reviewers were required to read the rebuttal and update their scores and reviews based on it A majority of reviewers updated their reviews and scores, and provided post-rebuttal comments At least 160 papers discussed extensively online post-rebuttal External reviewers fully involved in the discussion 22

Overall Merit Score Categories 23

Pre- and Post-Rebuttal OM Scores Pre-rebuttal score is the reviewer's score at the time the rebuttal is exposed to authors Post-rebuttal score is the overall merit score at the time the entire review process is over. Affected by: Authors' rebuttal Online discussion among reviewers Reading of other reviews Discussion during the PC meeting i.e., everything that happens after rebuttal 24

Distribution of Review Scores Received by Submitted Papers [bar chart: number of pre-rebuttal and post-rebuttal reviews at each score: very poor, poor, average, good, very good, excellent] 25

Distribution of Review Scores Received by Accepted Papers [bar chart: number of pre-rebuttal and post-rebuttal reviews at each score: very poor, poor, average, good, very good, excellent] 26

Magnitude of Change in Average OM Score After Rebuttal vs. Acceptance [bar chart: number of rejected and accepted papers whose average OM score was lowered by more than 1, lowered by 0.5 to 1, lowered by up to 0.5, unchanged, raised by up to 0.5, or raised by 0.5 to 1] 27

Score Changes for Papers w/ Avg. Pre-Rebuttal OM 1–1.5 [bar chart: number of rejected and accepted papers whose score was lowered by 1, unchanged, or raised by 1] 28

Score Changes for Papers w/ Avg. Pre-Rebuttal OM 1.5–2 [bar chart: score changes for rejected and accepted papers] 29

Score Changes for Papers w/ Avg. Pre-Rebuttal OM 2–2.5 [bar chart: score changes for rejected and accepted papers] 30

Score Changes for Papers w/ Avg. Pre-Rebuttal OM 2.5–3 [bar chart: score changes for rejected and accepted papers] 31

Score Changes for Papers w/ Avg. Pre-Rebuttal OM 3–3.5 [bar chart: score changes for rejected and accepted papers] 32

Score Changes for Papers w/ Avg. Pre-Rebuttal OM 3.5–4 [bar chart: score changes for rejected and accepted papers] 33

Score Changes for Papers w/ Avg. Pre-Rebuttal OM 4–4.5 [bar chart: number of rejected and accepted papers whose score was lowered by 1, unchanged, or raised by 1] 34

Score Changes for Papers w/ Avg. Pre-Rebuttal OM 4.5–5 [bar chart: number of rejected and accepted papers whose score was lowered by 1, unchanged, or raised by 1] 35

Paper Overall Score Distribution, Pre- and Post-Rebuttal [bar chart: number of rejected and accepted papers by average overall merit, pre-rebuttal and post-rebuttal] 36

Paper Avg. Novelty Score Distribution [bar chart: number of rejected and accepted papers by average novelty score] 37

Paper Avg. Expertise Score Distribution [bar chart: number of rejected and accepted papers by average expertise score] 38

Paper Avg. Importance Score Distribution [bar chart: number of rejected and accepted papers by average importance score] 39

Paper Avg. Writing Quality Score Distribution [bar chart: number of rejected and accepted papers by average writing quality score] 40

Paper Avg. Soundness of Ideas Score Distribution [bar chart: number of rejected and accepted papers by average soundness-of-ideas score] 41

Paper Avg. Soundness of Evaluation Score Distribution [bar chart: number of rejected and accepted papers by average soundness-of-evaluation score] 42

Program Committee Meeting 1.5-day meeting Main goal: avoid hasty decisions. Improve decision quality. Extra day enables mulling over decisions. More time allows for more and higher-quality discussions. No lost PC members. August 25-26, 2012, Hilton O'Hare Airport All PC members present the entire first day 42/44 PC members present the second day 82 papers discussed in (somewhat of a) rank order Not average overall merit score rank Goal: Consensus across the entire PC for accept/reject decision Entire PC voted when consensus was not clear 43

Metrics for Paper Discussion Order (I) Determined after many different ratings are considered Average Overall Merit (OM) Average OM weighted with expertise (2 ways) Average OM weighted with reviewer generosity Average OM weighted with reviewer expertise and generosity (2 ways) Total 6 OM metrics per paper 6 different versions of each OM metric [pre-rebuttal, post-rebuttal] x [all reviewers, reviewers with expertise > 1, reviewers with expertise > 2] 36 metrics and ranks for each paper A kitchen-sink metric averages all these A correlation analysis would be interesting 44
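The combination scheme on this slide can be made concrete with a small enumeration. The sketch below is only an illustration of the counting (hypothetical metric names, not the chairs' actual scripts): 6 overall-merit (OM) metrics, each computed pre- and post-rebuttal, over 3 reviewer-expertise filters, giving 36 variants.

```python
# A minimal sketch, with hypothetical metric names, of the 36 ranking variants:
# 6 OM metrics x 2 timings x 3 reviewer-expertise filters.
from itertools import product

om_metrics = [
    "avg_om",
    "om_weighted_by_expertise_v1",
    "om_weighted_by_expertise_v2",
    "om_weighted_by_generosity",
    "om_weighted_by_expertise_and_generosity_v1",
    "om_weighted_by_expertise_and_generosity_v2",
]
timings = ["pre_rebuttal", "post_rebuttal"]
expertise_filters = ["all_reviewers", "expertise_gt_1", "expertise_gt_2"]

variants = ["/".join(v) for v in product(om_metrics, timings, expertise_filters)]
assert len(variants) == 36  # a "kitchen-sink" rank would average a paper's rank under all 36
```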

Metrics for Paper Discussion Order (II) No single best metric Plus, many things metrics cannot capture We took all the rankings with a large grain of salt Final discussion order determined by examining all metrics and each paper individually 9 groups of papers formed; 7 of which were discussed Bottom line: Top 72 papers in terms of post-rebuttal rank (with nonexperts excluded) ended up on the discussion list +7 more with at least one A score but lower averages +3 more papers brought up by reviewers 45

New Metrics to Rank Papers 46

Problem with Average Overall Merit All scores are given the same weight Varying expertise Different reviewers have different expertise Varying generosity Some reviewers more harsh/generous than others 47

Expert Mean Score Taking expertise of each reviewer into account Idea: Give more weight to scores from experts Weight each score by expertise Expert Mean of a Paper = Σ over reviews (Expertise × Score) / Σ over reviews (Expertise) Score refers to the overall merit 48
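As a concrete illustration, here is a minimal sketch of the expertise-weighted mean described above; the review-record field names ("expertise", "score") are hypothetical, not the actual submission-system schema.

```python
# A minimal sketch of the expertise-weighted ("expert mean") overall-merit score.
def expert_mean(reviews):
    """Overall-merit mean of a paper, weighting each review by its expertise score."""
    total_weight = sum(r["expertise"] for r in reviews)
    return sum(r["expertise"] * r["score"] for r in reviews) / total_weight

# Example: an expert (expertise 4) rating 5 counts more than a
# low-expertise reviewer (expertise 1) rating 2.
print(expert_mean([{"expertise": 4, "score": 5}, {"expertise": 1, "score": 2}]))  # 4.4
```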

Generosity Mean Score Take generosity of each reviewer into account Idea: A review with a low score compared to other reviews for the paper is considered less generous Generosity of a Reviewer = Mean over all papers reviewed of (Score by the reviewer / Mean score for the paper) Generosity Score of a Review = Review score / Generosity of the reviewer Generosity Mean of a Paper = Σ over all reviews (Generosity score of the review) / Number of reviews 49
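Below is a minimal sketch of these generosity definitions, assuming a hypothetical data layout (scores[reviewer][paper] = overall-merit score); function and field names are illustrative only.

```python
# Generosity normalization: reviewers who score above paper means are "generous",
# and each review is deflated/inflated by its reviewer's generosity.
from statistics import mean

def paper_means(scores):
    """Mean overall-merit score of each paper across its reviews."""
    papers = {p for revs in scores.values() for p in revs}
    return {p: mean(revs[p] for revs in scores.values() if p in revs) for p in papers}

def reviewer_generosity(scores):
    """Generosity of a reviewer = mean over their papers of (their score / paper's mean score)."""
    pm = paper_means(scores)
    return {r: mean(s / pm[p] for p, s in revs.items()) for r, revs in scores.items()}

def generosity_mean(scores, paper):
    """Generosity mean of a paper = average of (review score / reviewer's generosity)."""
    gen = reviewer_generosity(scores)
    return mean(revs[paper] / gen[r] for r, revs in scores.items() if paper in revs)
```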

Expert Generosity Take expertise and generosity into account Idea: An expert review with a low score is likely not harsh compared to a non-expert review with a high score Two flavors of expert-generosity Pre-expert generosity Post-expert generosity 50

Pre-expert Generosity Mean Account for generosity before expertise Similar to expert mean: use the generosity score instead of the overall merit Expert Mean of a Paper = Σ over reviews (Expertise × Score) / Σ over reviews (Expertise) Pre-expert Generosity Mean = Σ over reviews (Expertise × Generosity Score) / Σ over reviews (Expertise) 51

Post-expert Generosity Mean Account for expertise before generosity Expert-generosity of a Reviewer = Mean over all papers reviewed of (Score by the reviewer / Expert mean score of the paper) Expert-generosity Score of a Review = Review score / Expert-generosity of the reviewer Post-expert Generosity Mean = Σ over all reviews (Expert-generosity score of the review) / Number of reviews 52
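To make the two flavors from the last two slides concrete, here is a minimal sketch assuming per-review records with hypothetical field names; the generosity score and the per-reviewer expert-generosity values are assumed to be computed as in the earlier generosity sketch.

```python
# Pre-expert generosity: apply generosity first, then weight by expertise.
# Post-expert generosity: compute expertise-weighted paper means first, then
# normalize each score by its reviewer's expert-generosity.
from statistics import mean

def pre_expert_generosity_mean(reviews):
    """Expertise-weighted mean of the generosity scores of one paper's reviews."""
    # r["generosity_score"] = review score / generosity of the reviewer (earlier sketch)
    total_weight = sum(r["expertise"] for r in reviews)
    return sum(r["expertise"] * r["generosity_score"] for r in reviews) / total_weight

def post_expert_generosity_mean(reviews, expert_generosity):
    """Average of (review score / reviewer's expert-generosity) over one paper's reviews.
    expert_generosity[r] = mean over r's papers of (r's score / paper's expert mean)."""
    return mean(r["score"] / expert_generosity[r["reviewer"]] for r in reviews)
```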

Case Study (for Generosity)
                         Paper 1 (Good)   Paper 2 (Average)   Paper 3 (Bad)   Generosity
Reviewer 1 (Generous)          6                 5                  -            1.29
Reviewer 2 (Neutral)           -                 3                  2            1.04
Reviewer 3 (Harsh)             3                 -                  1            0.67
Mean Average Score           4.5                 4                1.5
Mean Generosity Score       4.57              3.38               1.71
Accounting for generosity increases the gap between the good paper that received a harsh review and the average paper that received a generous review. 53
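The case-study numbers follow directly from the generosity definitions; the short self-contained script below (hypothetical data layout, review scores taken from the table) recomputes the per-paper means, per-reviewer generosity, and generosity means.

```python
# Reproducing the case-study numbers from the generosity definitions above.
from statistics import mean

scores = {  # scores[reviewer][paper] = overall-merit score given
    "R1 (generous)": {"P1": 6, "P2": 5},
    "R2 (neutral)":  {"P2": 3, "P3": 2},
    "R3 (harsh)":    {"P1": 3, "P3": 1},
}
papers = sorted({p for revs in scores.values() for p in revs})
paper_mean = {p: mean(revs[p] for revs in scores.values() if p in revs) for p in papers}
generosity = {r: mean(s / paper_mean[p] for p, s in revs.items()) for r, revs in scores.items()}
gen_mean = {p: mean(revs[p] / generosity[r] for r, revs in scores.items() if p in revs)
            for p in papers}

print(paper_mean)                                          # means: P1 = 4.5, P2 = 4, P3 = 1.5
print({r: round(g, 2) for r, g in generosity.items()})     # generosity ~ 1.29, 1.04, 0.67
print({p: round(m, 2) for p, m in gen_mean.items()})       # generosity means ~ 4.57, 3.38, 1.71
```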

Post-PC Meeting Summary statements for discussed and rejected papers Written by me or an expert PC member Goal: provide insight/visibility for authors into the discussion of the paper in the PC meeting Provide major reasons for rejection. Can be done more methodically in the future: assign a scribe for each paper All papers shepherded Goal: improve quality of final program Did achieve its purpose in many cases 54

During Conference Goal: Improve exposure of papers in the presence of parallel sessions Enable better technical interactions Enable better Best * decisions Lightning session Poster session 55

Best Paper Award Process 8 papers chosen as candidates based on Rankings (based on 36 different metrics) PC member vetting Reviewer vetting 7-person Best Paper Award Committee Not conflicted with any of the 8 candidate papers Can select another paper as the winner Can select 0-N papers The same committee will select the Best Lightning Session Presentation and Best Poster Award winners Sessions to aid the selection process 56

Feedback Any type of feedback on any part of the process is very much appreciated Goal: Living document that continually improves the process and institutes best practices Methods of providing feedback Online feedback form http://www.surveymonkey.com/s/z8j6fxt In person Via email Snail mail! 57

Micro-45 Survey Link on Website www.microsymposia.org/micro45 58

Thanks Literally hundreds of people PC, ERC, Steering Committee All reviewers All submitting authors All presenters All attendees Many others (see my note) Strong and thriving community effort 59

Thanks These slides were prepared in part by Vivek Seshadri, Carnegie Mellon Chris Fallin, Carnegie Mellon Justin Meza, Carnegie Mellon 60

Thank You. Onur Mutlu PC Chair December 3, 2012 Vancouver, BC, Canada 61

Vancouver Aquarium Tonight Buses leave from 6:45pm to 7:45pm, return from 10pm to midnight. Extra Tickets Available 62


Micro 2012 Program Chair's Remarks Onur Mutlu, PC Chair December 3, 2012 Vancouver, BC, Canada 64

Additional Data 65

Reviews Categorized by Magnitude of Score Change [bar chart: number of reviews for submitted and accepted papers, by magnitude of score change] 66

Reviews Categorized by Magnitude of Score Change (Detail) [bar chart: number of reviews for submitted and accepted papers, by magnitude of score change, detailed view] 67

Papers Categorized by Post-Rebuttal Avg. Score Change Direction [bar chart: number of papers whose average score moved up, moved down, or did not change] 68

Papers Categorized by Post-Rebuttal Avg. Score Change Direction [bar chart: number of accepted and rejected papers whose score moved up, moved down, or did not change; the number on top of each bar indicates the average pre-rebuttal score across all papers in the respective group] 69