What do we parse when we parse?

26 Mar

Dear all,

Prof. Dr. Cem Bozşahin of the METU Cognitive Science Department will give a talk on April 4th (Friday). The title and abstract of his talk are below. If you are planning to attend, please reply to me as soon as possible with an indication of what time would work for you, so that I can arrange a suitable room and time for the talk:

E-mail address: aeminorhan at gmail com

“What do we parse when we parse?” by Prof. Dr. Cem Bozşahin, METU Cognitive Science Department

We parse strings, of course, but what is in those strings? Words (known or unknown), affixes, clitics, tones, pitch contours, pitch accents, stress, syllables, interjections, noise, and perhaps more. The amazing aspect is that parsing is almost like a reflex (try turning it off if you are a skeptic), and interpretation of the string (right or wrong) is immediate. Linguistic theories have to come to grips with these facts for their notion of computation to be acceptable by computationalist standards: fast, efficient, simple, effortless, robust, transparent, and experience-driven processing. Computationalists, on the other hand, should realise that this is not possible without something turning the whole affair into a very guided search problem for the child, i.e. a bias, and linguists think that bias is the principles of universal grammar.

In this talk, I will briefly describe what Combinatory Categorial Grammar (CCG) offers for conceiving a theory of competence and performance in a single package. Unlike multi-structural theories, and somewhat unexpectedly, it suggests that anything that bears on the compositional semantics of strings can be reflected in lexicalized syntactic types, as long as we conceive of the types as things that are ultimately semantic in nature. It claims that human languages are type dependent, rather than structure dependent. Hence all these aspects are assumed to be diverse semantic constraints on a single dynamic aspect of grammar: its syntactic types. The theory suggests novel alternatives to perennial linguistic problems, such as unbounded extraction, crossing dependencies and flexible constituency, and to computational problems such as efficient parsing under ambiguity and learning. Its computational tractability and learnability make it a good candidate for wide-coverage parsing.
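To give readers unfamiliar with CCG a concrete sense of what "type-driven" combination means, here is a minimal sketch of the two basic CCG rules, forward and backward application. The category names, the tiny lexicon, and the class design are illustrative assumptions for this post, not material from the talk itself.

```python
class Cat:
    """Atomic category, e.g. NP or S."""
    def __init__(self, name):
        self.name = name
    def __eq__(self, other):
        return isinstance(other, Cat) and self.name == other.name
    def __repr__(self):
        return self.name

class Slash:
    """Functor category result/arg: '/' seeks its argument to the right,
    '\\' seeks it to the left."""
    def __init__(self, result, direction, arg):
        self.result, self.direction, self.arg = result, direction, arg
    def __eq__(self, other):
        return (isinstance(other, Slash) and self.result == other.result
                and self.direction == other.direction and self.arg == other.arg)
    def __repr__(self):
        return f"({self.result}{self.direction}{self.arg})"

def apply_fwd(left, right):
    """Forward application (>):  X/Y  Y  =>  X."""
    if isinstance(left, Slash) and left.direction == '/' and left.arg == right:
        return left.result
    return None

def apply_bwd(left, right):
    """Backward application (<):  Y  X\\Y  =>  X."""
    if isinstance(right, Slash) and right.direction == '\\' and right.arg == left:
        return right.result
    return None

# Tiny illustrative lexicon: a transitive verb gets the type (S\NP)/NP.
NP, S = Cat('NP'), Cat('S')
lexicon = {
    'Mary':  NP,
    'books': NP,
    'reads': Slash(Slash(S, '\\', NP), '/', NP),
}

# Derive "Mary reads books": combine (reads books) by >, then Mary + VP by <.
vp = apply_fwd(lexicon['reads'], lexicon['books'])   # (S\NP)
sentence = apply_bwd(lexicon['Mary'], vp)            # S
print(sentence)  # -> S
```

Note that the derivation is driven entirely by the lexical types: there are no phrase-structure rules, which is the sense in which CCG is lexicalized and type dependent rather than structure dependent.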

