kamishima.net
Fairness-Aware Data Mining | Toshihiro Kamishima
http://www.kamishima.net/fadm
Soft & Data. The goal of fairness-aware data mining is to analyze data while taking into account potential issues of fairness, discrimination, neutrality, and/or independence. Pedreschi, Ruggieri, and Turini first posed this problem at KDD 2008, and a literature on the topic has since emerged. General discussion about fairness-aware data mining. Future Directions of Fairness-aware Data Mining: Recommendation, Causality, and Theoretical Aspects (ICMLW15). Links to Related Sites. We stated a fai...
vkrakovna.wordpress.com
Victoria Krakovna | Deep Safety
https://vkrakovna.wordpress.com/author/vkrakovna
Victoria's musings on AI safety, machine learning, and rationality. Author Archives: Victoria Krakovna. 2016-17 New Year review. Got a job at DeepMind as a research scientist in AI safety. Finished the RNN interpretability paper and presented it at ICML and NIPS workshops. Attended the Deep Learning Summer School. Finished and defended PhD thesis. Moved to London and started working at DeepMind. Moderated a panel at Effective Altruism Global X Boston and sat on a panel at Brain Bar Budapest. Started contributing t...
vkrakovna.wordpress.com
AI Safety Highlights from NIPS 2016 | Deep Safety
https://vkrakovna.wordpress.com/2016/12/28/ai-safety-highlights-from-nips-2016
Victoria's musings on AI safety, machine learning, and rationality. AI Safety Highlights from NIPS 2016. This year’s Neural Information Processing Systems conference was larger than ever, with almost 6000 people attending, hosted in a huge convention center in Barcelona, Spain. The conference started off with two exciting announcements on open-sourcing collections of environments for training and testing general AI capabilities: DeepMind Lab and OpenAI Universe. Cooperative Inverse Reinforcement Learning (CIRL) by Hadfield-Menell, Russel...
approximatelycorrect.com
Machine Learning – Approximately Correct
http://www.approximatelycorrect.com/tag/machine-learning
Technical and Social Perspectives on Machine Learning. Fake News Challenge – Revised and Revisited. The organizers of the Fake News Challenge have subjected it to a significant overhaul. In this light, many of my criticisms of the challenge no longer apply. Last month, I posted a critical piece addressing the fake news challenge. “Hillary Clinton eats babies”. My response criticized the challenge as both ill-specified. How do we know the supporting documents are legit? And body text (from...
approximatelycorrect.com
AI Safety Highlights from NIPS 2016 – Approximately Correct
http://www.approximatelycorrect.com/2016/12/28/ai-safety-highlights-from-nips-2016
AI Safety Highlights from NIPS 2016. This article is cross-posted from my blog. [Thanks to Jan Leike, Zachary Lipton, and Janos Kramar for providing feedback on this post.] And the OpenAI Universe. Among other things, this is promising for testing safety properties of ML algorithms. OpenAI has already used their Universe environment to give an entertaining and instructive demonstration of reward hacking. Cooperative Inverse Reinforcement Learning (CIRL) by Hadfield-Menell, Russell, Abb...
approximatelycorrect.com
Policy Field Notes: NIPS Update – Approximately Correct
http://www.approximatelycorrect.com/2017/01/05/policy-field-notes-nips-update
Policy Field Notes: NIPS Update. Conversations about the social impact of AI are often very abstract, focusing on broad generalizations about technology rather than on the specific state of the research field. That makes it challenging to have a full conversation about what good public policy regarding AI would be like. In the interest of helping to bridge that gap, Jack Clark. Learning from Untrusted Data. Man is to Computer Progr...
approximatelycorrect.com
AI safety – Approximately Correct
http://www.approximatelycorrect.com/tag/ai-safety
AI Safety Highlights from NIPS 2016. This article is cross-posted from my blog. [Thanks to Jan Leike, Zachary Lipton, and Janos Kramar for providing feedback on this post.] And the OpenAI Universe. Among other things, this is promising for testing safety properties of ML algorithms. OpenAI has already used their Universe environment to give an entertaining and instructive demonstration of reward hacking. December 28, 2016.
approximatelycorrect.com
Conferences – Approximately Correct
http://www.approximatelycorrect.com/tag/conferences
Policy Field Notes: NIPS Update. Conversations about the social impact of AI are often very abstract, focusing on broad generalizations about technology rather than on the specific state of the research field. That makes it challenging to have a full conversation about what good public policy regarding AI would be like. In the interest of helping to bridge that gap, Jack Clark. January 5, 2017. Among other things, ...