Self-taught Machine Learning Researcher

Sara is a research scholar on the @Google Brain team, working on building interpretable machine learning models for reliability and robustness. We talk about how she transitioned from economics to pure research at Brain. We also talk in detail about what interpretability means, what the state-of-the-art techniques are, and some of the most important things any machine learning researcher must know.

00:00 Introductions
01:30 Agenda for the podcast
02:15 A bit about what she's currently working on in the Brain team
04:20 How did you make the transition from economics to computer science?
09:25 How did you do that, and what practices worked best for you? Probably useful for people who want to break into ML on their own.
15:35 How did you apply for Google Residency and what does a resident typically work on?
20:10 What does a typical day look like at the desk of a researcher at Google Brain? What are your deliverables?
22:55 What drives the research projects at Google Brain team? Product requirements or exploration of research questions?
25:25 From a surface perspective, we know we want interpretable AI models, but what is your take on why interpretability is not just a “desired” use case but a “needed” one?
32:48 What, according to you, is a technical definition of “interpretability”, if any?
36:35 Does the design of interpretable models vary across datasets, applications, and interactions with users?
38:43 How are the terms interpretability and explainability linked together? Are they different research topics or are they correlated?
41:34 Which techniques do you consider state-of-the-art for interpreting DL models?
44:00 What do you think are the building blocks of an interpretable AI model? For CV it's CNNs, and for NLP transformers…?
46:55 Do you see interpretability of NNs as a bottleneck to the practical use of ML models, or just a nice-to-have feature?
48:45 How do you envision the design of interpretable AI models?
52:40 What would be your overarching advice to people who want to get started with ML on their own, somewhat like you did? What are common pitfalls, and a word of advice for them?

Sara's homepage: https://www.sarahooker.me/

Also check out these talks on all available podcast platforms: https://jayshah.buzzsprout.com

About the Host:
Jay is a PhD student at Arizona State University, doing research on building interpretable AI models for medical diagnosis.
Jay Shah: / shahjay22

You can reach out via https://www.public.asu.edu/~jgshah1/ for any queries.
Stay tuned for upcoming webinars!

**Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement of the video content by any institution or its affiliates.**
