Bookmarks for July 10th through July 11th

by danhon

This is an auto-posted collection of public links I’ve either posted or favourited on Twitter, along with my Instapaper bookmarks and other public links, for July 10th from 14:10 to 11:34:

  • The Families Who Use Slack, Asana, Trello, and Jira – The Atlantic
  • STS-6 ORBITER CHALLENGER ( IN FOG ) : NASA/Glenn Research Center : Free Download, Borrow, and Streaming : Internet Archive
  • Book: Because Internet – Gretchen McCulloch
  • INTERSPEECH 2009 Abstract: Schuller et al. – The last decade has seen a substantial body of literature on the recognition of emotion from speech. However, in comparison to related speech processing tasks such as Automatic Speech and Speaker Recognition, practically no standardised corpora and test-conditions exist to compare performances under exactly the same conditions. Instead a multiplicity of evaluation strategies employed, such as cross-validation or percentage splits without proper instance definition, prevents exact reproducibility. Further, in order to face more realistic scenarios, the community is in desperate need of more spontaneous and less prototypical data. This INTERSPEECH 2009 Emotion Challenge aims at bridging such gaps between excellent research on human emotion recognition from speech and low compatibility of results. The FAU Aibo Emotion Corpus [1] serves as basis with clearly defined test and training partitions incorporating speaker independence and different room acoustics as needed in most real-life settings. This paper introduces the challenge, the corpus, the features, and benchmark results of two popular approaches towards emotion recognition from speech.
  • Speech Emotion Recognition: Two Decades in a Nutshell, Benchmarks, and Ongoing Trends | May 2018 | Communications of the ACM – Communication with computing machinery has become increasingly 'chatty' these days: Alexa, Cortana, Siri, and many more dialogue systems have hit the consumer market on a broader basis than ever, but do any of them truly notice our emotions and react to them like a human conversational partner would? In fact, the discipline of automatically recognizing human emotion and affective states from speech, usually referred to as Speech Emotion Recognition or SER for short, has by now surpassed the "age of majority," celebrating the 22nd anniversary after the seminal work of Dellaert et al. in 1996, arguably the first research paper on the topic.