<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>SpLU 2020</title>
<link rel="stylesheet" type="text/css" href="stylesheets/normalize.css" media="screen">
<link
href='http://fonts.googleapis.com/css?family=Open+Sans:400,700'
rel='stylesheet' type='text/css'>
<link rel="stylesheet"
href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css"
integrity="sha384-1q8mTJOASx8j1Au+a5WDVnPi2lkFfwwEAa8hDDdjZlpLegxhjVME1fgjWPGmkzs7"
crossorigin="anonymous">
<link rel="stylesheet" type="text/css" href="stylesheets/stylesheet.css" media="screen">
<link rel="stylesheet" type="text/css"
href="stylesheets/github-light.css" media="screen">
<!-- Latest compiled and minified CSS -->
</head>
<body>
<section class="page-header">
<h1 class="project-name">SpLU 2020</h1>
<h2 class="project-tagline">Third International Workshop on Spatial Language Understanding</h2>
<h2 class="project-tagline">In conjunction with The Conference on Empirical Methods in Natural Language Processing 2020 <a href="http://www.wikicfp.com/cfp/program?id=883">(EMNLP 2020)</a>.
<br><br>
Date: November 19, 2020
<!--<br><br>
<a href="https://www.aclweb.org/anthology/volumes/2020.splu-1/"> (Proceedings)</a>
-->
<br><br>
Join the virtual SpLU event <a href="https://virtual.2020.emnlp.org/workshop_WS-10.html"> here</a>.
<br><br>Please note all the talks will be played on Zoom <a href="https://us02web.zoom.us/j/9514624887">here</a>.<br><br>
<a href="https://www.aclweb.org/anthology/volumes/2020.splu-1/"> Proceedings</a>
</h2>
<br>
</section>
<section>
<!-- Static navbar -->
<nav class="navbar navbar-default">
<div class="container-fluid">
<div class="navbar-header">
<button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#navbar" aria-expanded="false" aria-controls="navbar">
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
</div>
<div id="navbar" class="navbar-collapse collapse">
<ul class="nav navbar-nav">
<li class="active"><a href="#topics">Topics</a></li>
<li><a href="#invitedSpeakers">Invited Speakers</a></li>
<li><a href="#schedule">Schedule</a></li>
<li><a href="#accepted-papers">Accepted Papers</a></li>
<!--<li><a href="#submission-info">Submission</a></li>-->
<li><a href="#important-dates">Important Dates</a></li>
<!--li><a href="#demos">Demos</a></li-->
<!--li><a href="#panel">Panel</a></li-->
<!--li><a href="#submission-info">Submission</a></li-->
<li><a href="#organizers">Organizers</a></li>
<li><a href="#program-commitee">Program Committee</a></li>
</ul>
</div><!--/.nav-collapse -->
</div><!--/.container-fluid -->
</nav>
</section>
<section class="main-content">
<h2>Overview</h2>
<p>
Leveraging the foundation built in the prior workshops SpLU-RoboNLP 2019 and SpLU 2018, and focusing on the gaps identified therein, we propose the third workshop on Spatial Language Understanding. One of the essential functions of natural language is to express spatial relationships between objects. Spatial language understanding is useful in many research areas and real-world applications, including robotics, navigation, geographic information systems, traffic management, human-machine interaction, query answering and translation systems. Compared to other semantically specialized linguistic tasks, standardizing tasks related to spatial language seems more challenging, as it is harder to obtain an agreeable set of concepts and relationships and a formal spatial meaning representation that is domain independent and that allows both quantitative and qualitative reasoning.
This has made research results on spatial language learning and reasoning diverse, task-specific and, to some extent, not comparable. Attempts to arrive at a common set of basic concepts and relationships, and to make existing corpora interoperable, can help avoid duplicated effort within and across fields, allowing each field to focus instead on further developments in automatic learning and reasoning. Existing qualitative and quantitative representation and reasoning models can be used to investigate the interoperability of machine learning and reasoning over spatial semantics. Research endeavors in this area could provide insights into many challenges of language understanding in general. Spatial semantics is also closely connected to the visualization of natural language and to grounding language in perception, central to dealing with configurations in the physical world, and it motivates combining vision and language for richer spatial understanding. In the third round of the SpLU workshop, we will focus on the same major topics as before:</p>
<ol>
<li>Spatial language meaning representation (continuous, symbolic)</li>
<li>Spatial language learning</li>
<li>Spatial language reasoning</li>
<li>Spatial Language Grounding and Combining vision and language</li>
<li>Applications of Spatial Language Understanding: QA, dialogue systems, Navigation, etc.</li>
</ol>
<p>
Spatial language meaning representation includes research related to cognitive and linguistically motivated spatial semantic representations, spatial knowledge representation and spatial ontologies, qualitative and quantitative models used for formal meaning representation, and spatial annotation schemes and efforts for creating specialized corpora. Spatial language learning considers both symbolic and sub-symbolic (continuous-representation) techniques and computational models for spatial information extraction, semantic parsing, and spatial co-reference within a global context that includes discourse and pragmatics, learned from data or derived from formal models. For the reasoning aspect, the workshop emphasizes the role of qualitative and quantitative formal representations in supporting spatial reasoning over natural language, the possibility of learning such representations from data, and the question of whether such formal representations are needed to support reasoning at all or whether there are viable alternatives. For the multi-modality aspect, answers to questions such as the following will be discussed: (1) Which representations are appropriate for different modalities, and which are modality independent? (2) How can we exploit visual information for spatial language learning and reasoning? All related applications are welcome, including text-to-scene conversion, spatial and visual question answering, spatial understanding in multi-modal settings for robotics and navigation tasks, and language grounding. The workshop aims to encourage discussion across the fields dealing with spatial language and other modalities. The desired outcome is the identification of shared as well as unique challenges, problems and future directions across the fields and application domains related to spatial language understanding.
</p>
<p>The specific topics include but are not limited to: </p>
<ul>
<li> Spatial meaning representations, continuous representations, ontologies, annotation schemes, linguistic corpora</li>
<li>Spatial information extraction from natural language</li>
<li>Spatial information extraction in robotics, multi-modal environments, navigational instructions</li>
<li>Text mining for spatial information in GIS systems, geographical knowledge graphs</li>
<li>Spatial question answering, spatial information for visual question answering</li>
<li>Quantitative and qualitative reasoning with spatial information</li>
<li>Spatial reasoning based on natural language or multi-modal information (vision and language)</li>
<li>Extraction of spatial common sense knowledge</li>
<li>Visualization of spatial language in 2-D and 3-D</li>
<li>Spatial natural language generation</li>
<li>Grounded spatial language and dialog systems</li>
</ul>
<a id="invitedSpeakers" class="anchor" href="#invitedSpeakers" aria-hidden="true"><span class="octicon octicon-link"></span></a><h2>Invited Speakers</h2>
<ul>
<li>
<font size="3" color="black"><b><a href="http://jamespusto.com">James Pustejovsky</a>, Brandeis University. <a href="invited_abstracts/pustejovsky_abstract.html">Abstract</a>. <a href="invited_bios/pustejovsky_bio.html">Bio</a>.</b></font></li>
<li>
<font size="3" color="black"><b><a href="http://juliahmr.cs.illinois.edu">Julia Hockenmaier</a>, University of Illinois at Urbana-Champaign. <a href="invited_abstracts/hockenmaier_abstract.html">Abstract</a>. <a href="invited_bios/hockenmaier_bio.html">Bio</a>.</b></font></li>
<li>
<font size="3" color="black"><b><a href="https://yoavartzi.com">Yoav Artzi</a>, Cornell University. <a href="invited_abstracts/artzi_abstract.html">Abstract</a>. <a href="invited_bios/artzi_bio.html">Bio</a>.</b></font></li>
<li> <font size="3" color="black"><b><a href="https://www.ihmc.us/groups/bdorr/"> Bonnie J. Dorr</a>, Florida Institute for Human and Machine Cognition. <a href="invited_abstracts/dorr_abstract.html">Abstract</a>. <a href="invited_bios/dorr_bio.html">Bio</a>.</b></font></li>
<li>
<font size="3" color="black"><b><a href="https://douwekiela.github.io">Douwe Kiela</a>, Facebook. <a href="invited_abstracts/kiela_abstract.html">Abstract</a>. <a href="invited_bios/kiela_bio.html">Bio</a>.</b></font></li>
</ul>
<a id="schedule" class="anchor" href="#schedule" aria-hidden="true"><span class="octicon octicon-link"></span></a><h2>Schedule (EST)</h2>
Please note all the talks will be played on Zoom <a href="https://us02web.zoom.us/j/9514624887">here</a>.<br><br>
<table class="table" style="width:100%">
<tr class="info">
<td>8:00-9:00 AM</td>
<td><strong>QA/Poster</strong></td>
<td>Workshop Organizers</td>
</tr>
<tr class="info">
<td>9:00-9:10 AM</td>
<td><strong>Opening Talk </strong></td>
<td>Parisa Kordjamshidi</td>
</tr>
<tr class="success">
<td>9:10-10:00 AM</td>
<td><strong>Invited Talk</strong></td>
<td>James Pustejovsky</td>
</tr>
<tr class="success">
<td>10:00-10:56 AM</td>
<td><strong>Paper Presentations (1,2,3,11)</strong></td>
<td></td>
</tr>
<tr>
<td>10:56-11:05 AM</td>
<td><strong>Break</strong></td>
<td> </td>
</tr>
<tr class="success">
<td>11:05-11:55 AM</td>
<td><strong>Invited Talk</strong></td>
<td>Julia Hockenmaier</td>
</tr>
<tr class="success">
<td>11:55-12:51 PM</td>
<td><strong>Paper Presentations (4,5,12,13)</strong></td>
<td></td>
</tr>
<tr>
<td>12:51-1:00 PM</td>
<td><strong>Break</strong></td>
<td> </td>
</tr>
<tr class="success">
<td>1:00-1:50 PM</td>
<td><strong>Invited Talk</strong></td>
<td>Yoav Artzi</td>
</tr>
<tr class="success">
<td>1:50-2:46 PM</td>
<td><strong>Paper Presentations (6,7,8,14)</strong></td>
<td></td>
</tr>
<tr class="info">
<td>2:46-3:45 PM</td>
<td><strong>QA/Poster</strong></td>
<td>Workshop Organizers</td>
</tr>
<tr class="success">
<td>3:45-4:35 PM</td>
<td><strong>Invited Talk</strong></td>
<td>Bonnie J. Dorr</td>
</tr>
<tr class="success">
<td>4:35-5:31 PM</td>
<td><strong>Paper Presentations (9,10,15,16)</strong></td>
<td></td>
</tr>
<tr>
<td>5:31-5:45 PM</td>
<td><strong>Break</strong></td>
<td> </td>
</tr>
<tr class="success">
<td>5:45-6:35 PM</td>
<td><strong>Invited Talk</strong></td>
<td>Douwe Kiela</td>
</tr>
<tr class="success">
<td>6:35-7:03 PM</td>
<td><strong>Paper Presentations (17,18)</strong></td>
<td></td>
</tr>
<tr class="info">
<td>7:03-8:00 PM</td>
<td><strong>Panel Discussion</strong></td>
<td></td>
</tr>
<tr class="info">
<td>8:00-9:00 PM</td>
<td><strong>QA/Poster</strong></td>
<td>Workshop Organizers</td>
</tr>
</table>
<a id="accepted-papers" class="anchor" href="#accepted-papers" aria-hidden="true"><span class="octicon octicon-link"></span></a>
<h2>Accepted Papers<a href="https://www.aclweb.org/anthology/volumes/2020.splu-1/"> (Proceedings)</a></h2>
<ol>
<li><b>An Element-wise Visual-enhanced BiLSTM-CRF Model for Location Name Recognition. <a href="https://www.aclweb.org/anthology/2020.splu-1.1/">Paper</a>.</b><br><i><font size=2>Takuya Komada and Takashi Inui</font></i></li>
<li><b>BERT-based Spatial Information Extraction. <a href="https://www.aclweb.org/anthology/2020.splu-1.2/">Paper</a>.</b><br><i><font size=2>Hyeong Jin Shin, Jeong Yeon Park, Dae Bum Yuk and Jae Sung Lee</font></i></li>
<li><b>A Cognitively Motivated Approach to Spatial Information Extraction. <a href="https://www.aclweb.org/anthology/2020.splu-1.3/">Paper</a>.</b><br><i><font size=2>Chao Xu, Emmanuelle-Anna Dietz Saldanha, Dagmar Gromann and Beihai Zhou</font></i></li>
<li><b>They are not all alike: answering different spatial questions requires different grounding strategies. <a href="https://www.aclweb.org/anthology/2020.splu-1.4/">Paper</a>.</b><br><i><font size=2>Alberto Testoni, Claudio Greco, Tobias Bianchi, Mauricio Mazuecos, Agata Marcante, Luciana Benotti and Raffaella Bernardi</font></i></li>
<li><b>Categorisation, Typicality and Object-Specific Features in Spatial Referring Expressions. <a href="https://www.aclweb.org/anthology/2020.splu-1.5/">Paper</a>.</b><br><i><font size=2>Adam Richard-Bollans, Anthony Cohn and Lucía Gómez Álvarez</font></i></li>
<li><b>A Hybrid Deep Learning Approach for Spatial Trigger Extraction from Radiology Reports. <a href="https://www.aclweb.org/anthology/2020.splu-1.6/">Paper</a>.</b><br><i><font size=2>Surabhi Datta and Kirk Roberts</font></i></li>
<li><b>Retouchdown: Releasing Touchdown on StreetLearn as a Public Resource for Language Grounding Tasks in Street View. <a href="https://www.aclweb.org/anthology/2020.splu-1.7/">Paper</a>.</b><br><i><font size=2>Harsh Mehta, Yoav Artzi, Jason Baldridge, Eugene Ie and Piotr Mirowski</font></i></li>
</ol>
<h2>Accepted Non-archival Submissions</h2>
<ol start="8">
<li><b>SpaRTQA: A Textual Question Answering Benchmark for Spatial Reasoning. <!--<a href="https://slideslive.com/38940084">Talk</a>.--></b><br><i><font size=2>Roshanak Mirzaee, Hossein Rajaby Faghihi and Parisa Kordjamshidi</font></i></li>
<li><b>Geocoding with multi-level loss for spatial language representation. <!--<a href="https://slideslive.com/38940083">Talk</a>.--></b><br><i><font size=2>Sayali Kulkarni, Shailee Jain, Mohammad Javad Hosseini, Jason Baldridge, Eugene Ie and Li Zhang</font></i></li>
<li><b>Vision-and-Language Navigation by Reasoning over Spatial Configurations. <!--<a href="https://slideslive.com/38940085">Talk</a>.--></b><br><i><font size=2>Yue Zhang, Quan Guo and Parisa Kordjamshidi</font></i></li>
</ol>
<h2>Accepted Findings Submissions</h2>
<ol start="11">
<li><b>Language-Conditioned Feature Pyramids for Visual Selection Tasks. <a href="https://www.aclweb.org/anthology/2020.findings-emnlp.420/">Paper</a>.</b><br><i><font size=2>Taichi Iki and Akiko Aizawa</font></i></li>
<li><b>A Linguistic Analysis of Visually Grounded Dialogues Based on Spatial Expressions. <a href="https://www.aclweb.org/anthology/2020.findings-emnlp.67/">Paper</a>.</b><br><i><font size=2>Takuma Udagawa, Takato Yamazaki and Akiko Aizawa</font></i></li>
<li><b>Visually-Grounded Planning without Vision: Language Models Infer Detailed Plans from High-level Instructions. <a href="https://www.aclweb.org/anthology/2020.findings-emnlp.395/">Paper</a>.</b><br><i><font size=2>Peter A. Jansen</font></i></li>
<li><b>Decoding Language Spatial Relations to 2D Spatial Arrangements. <a href="https://www.aclweb.org/anthology/2020.findings-emnlp.408/">Paper</a>.</b><br><i><font size=2>Gorjan Radevski, Guillem Collell, Marie-Francine Moens and Tinne Tuytelaars</font></i></li>
<li><b>LiMiT: The Literal Motion in Text Dataset. <a href="https://www.aclweb.org/anthology/2020.findings-emnlp.88/">Paper</a>.</b><br><i><font size=2>Irene Manotas, Ngoc Phuoc An Vo and Vadim Sheinin</font></i></li>
<li><b>ARRAMON: A Joint Navigation-Assembly Instruction Interpretation Task in Dynamic Environments. <a href="https://www.aclweb.org/anthology/2020.findings-emnlp.348/">Paper</a>.</b><br><i><font size=2>Hyounghun Kim, Abhay Zala, Graham Burri, Hao Tan and Mohit Bansal</font></i></li>
<li><b>Robust and Interpretable Grounding of Spatial References with Relation Networks. <a href="https://www.aclweb.org/anthology/2020.findings-emnlp.172/">Paper</a>.</b><br><i><font size=2>Tsung-Yen Yang, Andrew S. Lan and Karthik Narasimhan</font></i></li>
<li><b>RMM: A Recursive Mental Model for Dialogue Navigation. <a href="https://www.aclweb.org/anthology/2020.findings-emnlp.157/">Paper</a>.</b><br><i><font size=2>Homero Roman Roman, Yonatan Bisk, Jesse Thomason, Asli Celikyilmaz and Jianfeng Gao</font></i></li>
</ol>
<!--<a id="submission-info" class="anchor" href="#submission-info" aria-hidden="true"><span class="octicon octicon-link"></span></a>-->
<h2>Submission Procedure</h2>
We encourage contributions of technical papers (EMNLP style, 8 pages excluding references) and shorter position or demo papers describing previously unpublished work (EMNLP style, 4 pages maximum). EMNLP style files are available <a href="https://2020.emnlp.org/files/emnlp2020-templates.zip">[here]</a>. Please make submissions via Softconf <a href="https://www.softconf.com/emnlp2020/spatial-language/">[here]</a>.
<br><br><p><b>Non-archival option:</b> EMNLP workshops are traditionally archival. To allow dual submission of work to SpLU and other conferences/journals, we also include a non-archival track. Space permitting, these submissions will still be presented at the workshop and hosted on the workshop website, but they will not be included in the official proceedings. Please submit through Softconf, but indicate that it is a cross submission at the bottom of the submission form: <br> <img src="images/submission.png" alt="Submission type"></p>
<a id="important-dates" class="anchor" href="#accepted-papers" aria-hidden="true"><span class="octicon octicon-link"></span></a><h2>Important Dates</h2>
<ul>
<li>Submission Deadline: <strike>August 15</strike> August 21, 2020</li>
<li>Notification: Oct 1, 2020</li>
<li>Camera-ready deadline: Oct 12, 2020</li>
<li>Workshop Day: November 19, 2020</li>
</ul>
<a id="organizers" class="anchor" href="#organizers" aria-hidden="true"><span class="octicon octicon-link"></span></a>
<h2>Organizing Committee</h2>
<table cellspacing="0" cellpadding="0" style="width:100%">
<tr>
<td><li><a href="http://www.cse.msu.edu/~kordjams/">Parisa Kordjamshidi</a></td>
<td>Michigan State University</td>
<td>[email protected]</td>
</tr>
<tr>
<td><li><a href="https://www.ihmc.us/groups/abhatia/">Archna Bhatia</a></li></td>
<td>Institute for Human and Machine Cognition</td>
<td>[email protected]</td>
</tr>
<tr>
<td><li><a href="https://alikhanimalihe.wixsite.com/mysite">Malihe Alikhani</a></li></td>
<td>University of Pittsburgh</td>
<td>[email protected]</td>
</tr>
<tr>
<td><li><a href="http://www.jasonbaldridge.com">Jason Baldridge</a></li></td>
<td>Google</td>
<td>[email protected]</td>
</tr>
<tr>
<td><li><a href="http://www.cs.unc.edu/~mbansal/">Mohit Bansal</a></li></td>
<td>UNC Chapel Hill</td>
<td>[email protected]</td>
</tr>
<tr>
<td><li><a href="https://people.cs.kuleuven.be/~sien.moens/"> Marie-Francine Moens</a> </li></td>
<td>KU Leuven</td>
<td>[email protected]</td>
</tr>
</table>
Contact: <a href="mailto:"[email protected]">[email protected]</a>
<a id="program-commitee" class="anchor" href="#program-commitee" aria-hidden="true"><span class="octicon octicon-link"></span></a>
<h2>Program Committee</h2>
<div>
<table cellspacing="0" cellpadding="0" float="left">
<tr><td><li>Steven Bethard</li></td><td>The University of Arizona</td></tr>
<tr><td><li>Raffaella Bernardi</li></td><td>University of Trento</td></tr>
<tr><td><li>Mehul Bhatt</li></td><td>Örebro University - CoDesign Lab</td></tr>
<tr><td><li>Yonatan Bisk</li></td><td>Carnegie Mellon University</td></tr>
<tr><td><li>Johan Bos</li></td><td>University of Groningen</td></tr>
<tr><td><li>Asli Celikyilmaz</li></td><td>Microsoft Research</td></tr>
<tr><td><li>Joyce Chai</li></td><td>University of Michigan</td></tr>
<tr><td><li>Angel Xuan Chang</li></td><td>Simon Fraser University</td></tr>
<tr><td><li>Anthony Cohn</li></td><td>University of Leeds</td></tr>
<tr><td><li>Guillem Collell</li></td><td>KU Leuven</td></tr>
<tr><td><li>Simon Dobnik</li></td><td>University of Gothenburg</td></tr>
<tr><td><li>Bonnie J. Dorr</li></td><td>Institute for Human and Machine Cognition</td></tr>
<tr><td><li>Ekaterina Egorova</li></td><td>University of Zurich</td></tr>
<tr><td><li>Zoe Falomir</li></td><td>Universität Bremen</td></tr>
<tr><td><li>Francis Ferraro</li></td><td>University of Maryland Baltimore</td></tr>
<tr><td><li>Lucian Galescu</li></td><td>Institute for Human and Machine Cognition</td></tr>
<tr><td><li>Mehdi Ghanimifard</li></td><td>University of Gothenburg</td></tr>
<tr><td><li>Julia Hockenmaier</li></td><td>University of Illinois at Urbana-Champaign</td></tr>
<tr><td><li>Lei Li</li></td><td>ByteDance</td></tr>
<tr><td><li>Bruno Martins</li></td><td>University of Lisbon</td></tr>
<tr><td><li>Srini Narayanan</li></td><td>Google Inc.</td></tr>
<tr><td><li>Mari Broman Olsen</li></td><td>Lionbridge AI</td></tr>
<tr><td><li>Martijn van Otterlo</li></td><td>Open University (The Netherlands)</td></tr>
<tr><td><li>Ian Perera</li></td><td>Institute for Human and Machine Cognition</td></tr>
<tr><td><li>Kirk Roberts</li></td><td>UT Health</td></tr>
<tr><td><li>Manolis Savva</li></td><td>Stanford University</td></tr>
<tr><td><li>Kristin Stock</li></td><td>Massey University</td></tr>
<tr><td><li>Jesse Thomason</li></td><td>University of Washington</td></tr>
<tr><td><li>Clare Voss</li></td><td>ARL</td></tr>
</table>
</div>
If you are interested in joining the program committee and participating in reviewing submissions, please email the organizers at <a href="mailto:[email protected]">[email protected]</a>. Please mention your prior reviewing experience and include a link to your publication record in your email.
</section>
<!-- Start of StatCounter Code for Default Guide -->
<script type="text/javascript">
var sc_project=11083511;
var sc_invisible=1;
var sc_security="2f97c6cf";
var scJsHost = (("https:" == document.location.protocol) ?
"https://secure." : "http://www.");
document.write("<sc"+"ript type='text/javascript' src='" +
scJsHost+
"statcounter.com/counter/counter.js'></"+"script>");
</script>
<noscript><div class="statcounter"><a title="web analytics"
href="http://statcounter.com/" target="_blank"><img
class="statcounter"
src="//c.statcounter.com/11083511/0/2f97c6cf/1/" alt="web
analytics"></a></div></noscript>
<!-- End of StatCounter Code for Default Guide -->
</body>
</html>