Links Archive
- update: my coworkers want me to turn down my raise [Askamanager] (added Dec. 21, 2023, 3:46 a.m.)
- How did the U.S. achieve a soft landing? [Noahpinion.blog] (added Dec. 21, 2023, 3:46 a.m., in economics)
- "I offended people at a staff meeting, desk mate makes sex noises while she works, and more" [Askamanager] (added Dec. 21, 2023, 2:43 a.m.)
- Trump’s ‘Poisonous’ Campaign Rhetoric [The Dispatch] (added Dec. 21, 2023, 2:43 a.m., in news)
- Six reasons chipmakers should put their fabs in Japan [Noahpinion.blog] (added Dec. 21, 2023, 2:43 a.m.)
- update: I have to train an aggressive man when I have a trauma history [Askamanager] (added Dec. 20, 2023, 9:57 a.m.)
- A new podcast! [Slowboring] (added Dec. 20, 2023, 8:26 a.m.)
- I manage a habitual phone checker (added Dec. 20, 2023, 8:25 a.m.)
- Monthly Roundup #13: December 2023 (added Dec. 20, 2023, 8:25 a.m.)
- Discursive Warfare and Faction Formation (added Dec. 20, 2023, 8:25 a.m.)
- Ask me anything (added Dec. 20, 2023, 8:25 a.m.)
- Malthusian intuitions are destroying our politics (added Dec. 20, 2023, 8:25 a.m.)
- Chording "The Next Right Thing" (added Dec. 20, 2023, 8:25 a.m.)
- Governance of General-Purpose AI Systems in the AI Act - The Future Society (added Dec. 19, 2023, 7:49 a.m.)
- Experimentation, Testing, and Audit as a Cornerstone of EU AI Governance - The Future Society (added Dec. 19, 2023, 7:49 a.m.)
- Enforcement of the AI Act - The Future Society (added Dec. 19, 2023, 7:49 a.m.)
- Reflection Group on General-Purpose AI and Foundation Models - The Future Society (added Dec. 19, 2023, 7:49 a.m.)
- Measurement and Benchmarking in the AI Act - The Future Society (added Dec. 19, 2023, 7:49 a.m.)
- Regulatory Sandboxes in the AI Act - The Future Society (added Dec. 19, 2023, 7:49 a.m.)
- List of Potential Clauses_Aug 2023 v. 0.1 (added Dec. 19, 2023, 7:49 a.m.)
- Project proposal: Scenario analysis group for AI safety strategy — EA Forum (added Dec. 19, 2023, 7:49 a.m.)
- Fish Welfare Initiative Strategy Update: Broadening Our Research Mandate in India — EA Forum (added Dec. 19, 2023, 7:49 a.m.)
- Recommendations Biosecurity (added Dec. 19, 2023, 7:49 a.m.)
- Empfehlungen KI - Übersicht 2023 en.docx (added Dec. 19, 2023, 7:49 a.m.)
- https://republicans-science.house.gov/_cache/files/8/a/8a9f893d-858a-419f-9904-52163f22be71/191E586AF744B32E6831A248CD7F4D41.2023-12-14-aisi-scientific-merit-final-signed.pdf (added Dec. 19, 2023, 7:42 a.m.)
- Airtable - [Shared] Attendee List (added Dec. 19, 2023, 7:39 a.m.)
- Lewis Bollard: The state of animal welfare community needs and relationship to EA - Google Docs (added Dec. 19, 2023, 7:39 a.m.)
- Notes on Oli's Memo Session - Google Docs (added Dec. 19, 2023, 7:39 a.m.)
- MCF memo: Why EA? - Google Docs (added Dec. 19, 2023, 7:39 a.m.)
- MCF Copy of Will MacAskill HNW fundraising anecdotes - Google Docs (added Dec. 19, 2023, 7:39 a.m.)
- MCF Memo - The EA Forum is bad - Google Docs (added Dec. 19, 2023, 7:39 a.m.)
- [Edited] Incentives, PR, and Naive Consequentialism - Google Docs (added Dec. 19, 2023, 7:39 a.m.)
- What's going on in the AI advocacy space? - Google Docs (added Dec. 19, 2023, 7:39 a.m.)
- Naive Consequentialism Discussion notes - Google Docs (added Dec. 19, 2023, 7:39 a.m.)
- Generating next steps - Google Docs (added Dec. 19, 2023, 7:39 a.m.)
- [MCF version] On reallocating resources from EA per se to specific fields - for MCF, Sept 2023 [6 pages] - Google Docs (added Dec. 19, 2023, 7:39 a.m.)
- Israel Faces Renewed Calls for Hostage Negotiations [The Dispatch] (added Dec. 19, 2023, 7:10 a.m., in news)
- update: my boss is blocking my move to a new team [Askamanager] (added Dec. 19, 2023, 6:18 a.m.)
- vote for the worst boss of 2023: the finals [Askamanager] (added Dec. 19, 2023, 6:18 a.m.)
- Midwinter mailbag [Slowboring] (added Dec. 19, 2023, 6:18 a.m., in random)
- the best office holiday party date story of all time [Askamanager] (added Dec. 19, 2023, 6:12 a.m., in random)
- Nikki Haley could be the new John McCain [Natesilver.net] (added Dec. 19, 2023, 6:08 a.m., in politicalscience)
- Podcast: The productivity holiday gift guide... continued! [Chrisbailey] (added Dec. 19, 2023, 6:08 a.m.)
- Nations don't get rich by plundering other nations [Noahpinion.blog] (added Dec. 19, 2023, 6:08 a.m., in economics)
- "updates: the awful workload, the extra sick days proposal, and more" [Askamanager] (added Dec. 19, 2023, 5:45 a.m.)
- should we hire a candidate who’s unhappy with the salary? [Askamanager] (added Dec. 19, 2023, 5:44 a.m., in management)
- Trump won't make lobster tails cheaper [Slowboring] (added Dec. 19, 2023, 5:44 a.m., in economics)
- #StopRansomware: Rhysida Ransomware (added Dec. 19, 2023, 5:13 a.m.)
- British Library - Wikiwand (added Dec. 19, 2023, 5:13 a.m.)
- Bringing about animal-inclusive AI — EA Forum (added Dec. 19, 2023, 5:13 a.m.)
- Chatbots, deepfakes, and voice clones: AI deception for sale (added Dec. 19, 2023, 5:13 a.m.)
- Keep your AI claims in check (added Dec. 19, 2023, 5:13 a.m.)
- Talos Institute (added Dec. 19, 2023, 5:13 a.m.)
- Compute and other expenses for LLM alignment research (added Dec. 19, 2023, 5:13 a.m.)
- 10th edition of AI Safety Camp (added Dec. 19, 2023, 5:13 a.m.)
- Support for Deep Coverage of China and AI (added Dec. 19, 2023, 5:13 a.m.)
- OpenAI Overhauls Content Moderation Efforts as Elections Loom — The Information (added Dec. 19, 2023, 5:13 a.m.)
- Suggested Lists / X (added Dec. 19, 2023, 5:13 a.m.)
- Nita Farahany on the neurotechnology already being used to convict criminals and manipulate workers - 80,000 Hours (added Dec. 19, 2023, 5:13 a.m.)
- Google Funds Artificial Intelligence Center at Civil Rights Group - Bloomberg (added Dec. 19, 2023, 5:13 a.m.)
- AFL-CIO and Microsoft announce new tech-labor partnership on AI and the future of the workforce - Stories (added Dec. 19, 2023, 5:13 a.m.)
- `pip install squigglepy` fails in Google Colab notebook · Issue #56 · rethinkpriorities/squigglepy (added Dec. 19, 2023, 5:13 a.m.)
- IAPS TOCs - Google Docs (added Dec. 19, 2023, 5:13 a.m.)
- Bruxelles je t’aime - song and lyrics by Angèle (added Dec. 19, 2023, 5:13 a.m.)
- We asked Bard and ChatGPT the same questions. Here's what they said - AI Digest (added Dec. 19, 2023, 5:13 a.m.)
- The Art of Asking Questions — Asterisk (added Dec. 19, 2023, 5:13 a.m.)
- International survey of public opinion on AI safety - GOV.UK (added Dec. 19, 2023, 5:13 a.m.)
- AI presents growing risk to financial markets, US regulator warns (added Dec. 19, 2023, 5:13 a.m.)
- News Publishers See Google’s AI Search Tool as a Traffic-Destroying Nightmare - WSJ (added Dec. 19, 2023, 5:13 a.m.)
- How Google Got Back on Its Feet in AI Race — The Information (added Dec. 19, 2023, 5:13 a.m.)
- OpenAI Investor Says We Shouldn't Worry Too Much About Sentient AI (added Dec. 19, 2023, 5:13 a.m.)
- TikTok Asks Advertisers to Spend 50% More Next Year — The Information (added Dec. 19, 2023, 5:13 a.m.)
- The People With Power at AI Pioneer Anthropic — The Information (added Dec. 19, 2023, 5:13 a.m.)
- [WIP] types by peterhurford · Pull Request #29 · rethinkpriorities/squigglepy (added Dec. 19, 2023, 5:13 a.m.)
- Add API docs using Sphinx by michaeldickens · Pull Request #59 · rethinkpriorities/squigglepy (added Dec. 19, 2023, 5:13 a.m.)
- twitter.com/iandavidmoss/status/1736810576503091456 (added Dec. 19, 2023, 5:13 a.m.)
- twitter.com/daniel_271828/status/1736815138198761613 (added Dec. 19, 2023, 5:13 a.m.)
- AK1089 ⏸️ on X: "@peterwildeford PredictIt is selling Trump and Biden yes shares at 41 and 39 respectively (your predictions: 47 and 46). interested to know if you're buying?" (added Dec. 19, 2023, 5:13 a.m.)
- 8 Charts That Explain 2023 — The Information (added Dec. 19, 2023, 5:13 a.m.)
- Why Amazon and Nvidia Are Teaming Up in the Cloud — The Information (added Dec. 19, 2023, 5:13 a.m.)
- New PayPal CEO Fast-Tracks Upgrades to Beat Back Competition — The Information (added Dec. 19, 2023, 5:13 a.m.)
- Cooperative AI (added Dec. 19, 2023, 5:13 a.m.)
- Fundraising action items - Google Docs (added Dec. 19, 2023, 5:13 a.m.)
- Weekly Scorecard - Google Sheets (added Dec. 19, 2023, 5:13 a.m.)
- People we should maybe add to lt-discussion or our private LT newsletter - Google Docs (added Dec. 19, 2023, 5:13 a.m.)
- Why Alibaba’s Cloud Ambitions Fell to Earth — The Information (added Dec. 19, 2023, 5:13 a.m.)
- To Continue Innovating, OpenAI Should Return to Its Nonprofit Roots — The Information (added Dec. 19, 2023, 5:13 a.m.)
- Notes - IAPS/XST transition check-in - Google Docs (added Dec. 19, 2023, 5:13 a.m.)
- Thomas Wolf on X: "Some predictions for 2024 – keeping only the more controversial ones. You certainly saw the non-controversial ones (multimodality etc) already 1. At least 10 new unicorn companies building SOTA open foundation models in 2024 Stars are so aligned: - a smart small and dedicated…" (added Dec. 19, 2023, 5:13 a.m.)
- EAGxLatinAmerica 2024 (added Dec. 19, 2023, 5:13 a.m.)
- Effective Altruism Global (added Dec. 19, 2023, 5:13 a.m.)
- EA Global: London 2024 (added Dec. 19, 2023, 5:13 a.m.)
- Hugging Face - Wikiwand (added Dec. 19, 2023, 5:13 a.m.)
- EAs interested in EU policy: Consider applying for the European Commission’s Blue Book Traineeship — EA Forum (added Dec. 19, 2023, 5:13 a.m.)
- Mistral AI - Wikiwand (added Dec. 19, 2023, 5:13 a.m.)
- René Magritte - Wikiwand (added Dec. 19, 2023, 5:13 a.m.)
- Patriot Act - Wikiwand (added Dec. 19, 2023, 5:13 a.m.)
- roon on X: "the invention of certain new technologies grants you a temporary monopoly like discovering a trillion dollar gold mine. in the case of biotech it’s generally government enforced and in the case of internet giants it’s network effect. this is what Capital looks like" (added Dec. 19, 2023, 5:13 a.m.)
- The NeurIPS 2023 Paper Awards honor researchers working on the LUMI supercomputer's massive AMD GPU resources - LUMI (added Dec. 19, 2023, 5:13 a.m.)
- Ethan Mollick on X: "LLMs are good for many things but I think they are not ready yet for external facing sales and support roles. They are gullible & hallucinate. Here I interacted with a (pretty good!) GPT-4 powered bot for a Chevy dealership. I was still able to get it to give me bad pricing. https://t.co/ytQCxgMxgD" (added Dec. 19, 2023, 5:13 a.m.)
- 2312.10029.pdf (added Dec. 19, 2023, 5:13 a.m.)
- Kevin Cornea on X: "@peterwildeford I assume you know but both PredictIt and Polymarket are significantly off relative to your predictions. Polymarket actually has deep liquidity with Biden nom and Trump nom less so for others" (added Dec. 19, 2023, 5:13 a.m.)
- Forecasting Research Institute on X: "Today we've released the XPT replication package. Access code and anonymized data for insights and replication here: https://t.co/chIzbdHjTz #XPT #OpenScience #DataRelease #ExistentialRisk" (added Dec. 19, 2023, 5:13 a.m.)
- The 'Neglected Approaches' Approach: AE Studio's Alignment Agenda — LessWrong (added Dec. 19, 2023, 5:13 a.m.)
- Apply Now: £250,000 AI for Humanitarians Grant Fund - ICTworks (added Dec. 19, 2023, 5:13 a.m.)
- Ethan Mollick on X: "The viral Chevy chatbot is one of many AI solutions using RAG (where AI answers are augmented by search results) that are coming to market. Does anyone know any that have openly published hallucination rates red team testing results & real world performance? Would love to see." (added Dec. 19, 2023, 5:13 a.m.)
- Jeffrey Ladish on X: "We talk a lot about AI capabilities in programming science (especially bio) general purpose reasoning etc. but I think one of the most underrated capabilities is "ability to feel like a person who empathizes/desires/feels stuff". I think we'll see this take off in 2024" (added Dec. 19, 2023, 5:13 a.m.)
- Jeffrey Ladish on X: "I think Nora is wrong about a bunch of stuff but I appreciate how she shows up in good faith and argues for stuff she believes" (added Dec. 19, 2023, 5:13 a.m.)
- Tanishq Mathew Abraham, Ph.D. on X: "An In-depth Look at Gemini's Language Abilities abs: https://t.co/gBoaqgtXR6 Evaluates Gemini Pro GPT-3.5 Turbo GPT-4 Turbo and Mixtral over 10 datasets using exactly the same prompts and evaluation protocol for all evaluated models > In sum we found that across all… https://t.co/jX95nsjtyL" (added Dec. 19, 2023, 5:13 a.m.)
- blog - 2023 - 12 - 06 - long list ai questions (added Dec. 19, 2023, 5:13 a.m.)
- https://twitter.com/InfoWars_tv/status/1736162434941964328 (added Dec. 19, 2023, 5:13 a.m.)
- Staff members’ personal donations for giving season 2023 (added Dec. 19, 2023, 5:13 a.m.)
- The Biden Administration Invests in High-Speed Rail [The Dispatch] (added Dec. 19, 2023, 4:51 a.m., in news)
- Apply For An ACX Grant (2024) [Astralcodexten] (added Dec. 19, 2023, 4:19 a.m., in effectivealtruism)
- update: laser tag for team-building [Askamanager] (added Dec. 19, 2023, 4:15 a.m.)
- Copyright law is living in the past [Slowboring] (added Dec. 19, 2023, 4:15 a.m.)
- update: my coworker misinterprets all my facial expressions [Askamanager] (added Dec. 19, 2023, 4:15 a.m.)
- update: a “thought experiment” is causing a cold war in my office (added Dec. 19, 2023, 4:12 a.m.)
- the worst boss of 2023 is… (added Dec. 19, 2023, 4:12 a.m.)
- Google Gemini and the future of large language models (added Dec. 19, 2023, 4:12 a.m.)
- Research into global priorities - Career review [80000hours] (added Dec. 18, 2023, 10:06 a.m.)
- 80,000 Hours staff picks: our favourite content of 2021 - 80,000 Hours [80000hours] (added Dec. 18, 2023, 10:06 a.m.)
- Founder of new projects tackling top problems - Career review [80000hours] (added Dec. 18, 2023, 10:06 a.m.)
- Having a successful career with depression, anxiety, and imposter syndrome - 80,000 Hours [80000hours] (added Dec. 18, 2023, 10:04 a.m.)
- Long-term AI policy strategy research and implementation - Career review [80000hours] (added Dec. 18, 2023, 10:04 a.m.)
- Job board - 80,000 Hours [80000hours] (added Dec. 18, 2023, 10:04 a.m.)
- https://cepisites.secure.force.com/careers/xcdrecruit__PositionDetails?id=a433z000002LhJYAA0&utm_campaign=80000+Hours+Job+Board&utm_source=80000+Hours+Job+Board [Cepisites.secure.force] (added Dec. 18, 2023, 10:04 a.m.)
- Open Thread 307 [Astralcodexten] (added Dec. 18, 2023, 7:47 a.m.)
- Marcus Qs - Google Docs (added Dec. 18, 2023, 4:27 a.m.)
- What I think the AI plan is - Google Docs (added Dec. 18, 2023, 4:27 a.m.)
- Estimating the cost curve for AIGS research - Google Docs (added Dec. 18, 2023, 4:27 a.m.)
- AI crunch time questions for Peter - Google Docs (added Dec. 18, 2023, 4:27 a.m.)
- What is crunch time? - Google Docs (added Dec. 18, 2023, 4:27 a.m.)
- What 2026 looks like, by Daniel Kokotajlo, annotated by Peter Wildeford - Google Docs (added Dec. 18, 2023, 4:27 a.m.)
- We can do better than argmax — EA Forum (added Dec. 18, 2023, 4:27 a.m.)
- The "AI baseline scenario" is conjunctively unlikely - we need to be prepared for a bunch of AI scenarios - Google Docs (added Dec. 18, 2023, 4:27 a.m.)
- Dank EA Memes (added Dec. 18, 2023, 4:27 a.m.)
- 2023-05-25 Arb: generative bio - Google Docs (added Dec. 18, 2023, 4:27 a.m.)
- Daniel Eth (yes, Eth is my actual last name) on X: "Lol 30 year shortening in 3 years https://t.co/qpvAUh5gGB" (added Dec. 18, 2023, 4:27 a.m.)
- weak-to-strong-generalization.pdf (added Dec. 18, 2023, 4:27 a.m.)
- Our team - ICFG (added Dec. 18, 2023, 4:27 a.m.)
- Issue Brief: Considerations for Governing Open Foundation Models (added Dec. 18, 2023, 4:27 a.m.)
- AI Notkilleveryoneism Memes ⏸️ on X: "1) Character AI already has over 20 million people spending 2 HOURS A DAY talking to AIs (aka fake people) 2) Sama said AIs will soon be superhuman at persuasion 3) Those superhuman persuaders will soon outnumber us 10000 to 1. And be hot. An AI takeover scenario: You can’t…" (added Dec. 18, 2023, 4:27 a.m.)
- Stephen Casper @ NeurIPS on X: "This is very sad and was predicted by countless people. In about 18 months we went from the closed-source Dalle-2 which almost never produced anything NSFW to this. I think analogous slippery slopes will play out in the future. They will start when a “responsible” developer…" (added Dec. 18, 2023, 4:27 a.m.)
- Kat Woods on X: "Let's not be sexist here. Women can be manipulated by their AI boyfriends too. Imagine a guy who always has time to listen to you always empathizes with your problems adores you and treats you like a queen is interested in all of the same things you are and is constantly…" (added Dec. 18, 2023, 4:27 a.m.)
- Andrew on X: "Excited to announce v(1.0) of Digi the future of AI Romantic Companionship for IOS and Android 🤖 Site: https://t.co/q420GR4jJ4 Twitter: @digiaiapp A quick thread on features and where we go from here (1/13) https://t.co/9KZoorEoA0" (added Dec. 18, 2023, 4:27 a.m.)
- Siméon on X: "They have remained quiet but their influence has grown steadily. CharacterAI in 3 numbers: 1) 60% of users are 18 to 24 yo 2) Users spend an average of 2h/day on the platform 3) >20M monthly users It's just the beginning. Character's business model will push them to… https://t.co/vCKR4byZlK" (added Dec. 18, 2023, 4:27 a.m.)
- Séb Krier on X: "A few years ago I was discussing a lab research paper with a computer vision scientist who was skeptical of claims made. She (rightly) asked why she should trust the paper since there was no way to replicate it. That stuck with me: while not everything should necessarily be… https://t.co/dTodl4n9Ms" (added Dec. 18, 2023, 4:27 a.m.)
- Séb Krier on X: "Great @IST_org report nice to see a shift away from a binairy open/closed frame. For me this isn't really about Mixtral or Llama types; but with future larger frontier ones I can see a case for e.g. staggered releases and structured access mechanisms. https://t.co/86C5QwSBef https://t.co/fCcJHG9HVS" (added Dec. 18, 2023, 4:27 a.m.)
- Séb Krier on X: "Interesting study that seeks to unlearn a subset of the training data from Llama2-7b (Harry Potter books) without having to retrain it from scratch. Some limitations: 1. To detect specific anchored terms and devise generic counterparts the authors still needed to rely on… https://t.co/7dtahNlEug" (added Dec. 18, 2023, 4:27 a.m.)
- Perfectly Elodes on X: "real https://t.co/MNWiqlnLF2" (added Dec. 18, 2023, 4:27 a.m.)
- The Biggest AI Policy Developments of 2023 (added Dec. 18, 2023, 4:27 a.m.)
- Stefan Schubert on X: "Summary of Michael Beckley's Unrivaled: Why America Will Remain the World's Sole Superpower from five years ago. His analysis seems to have done well since. https://t.co/k8WAA7Xujt" (added Dec. 18, 2023, 4:27 a.m.)
- 2022 (and All Time) Posts by Pingback Count — LessWrong (added Dec. 18, 2023, 4:27 a.m.)
- More information about the dangerous capability evaluations we did with GPT-4 and Claude. — LessWrong (added Dec. 18, 2023, 4:27 a.m.)
- Alex Tabarrok 🛡️ on X: "Horrifying thought experiment." (added Dec. 18, 2023, 4:27 a.m.)
- 2310.07923.pdf (added Dec. 18, 2023, 4:27 a.m.)
- Limelihood ⏸️ Function on X: "This is fucked. Are there any systems that don't do this? Several but by far the best is highest median. In highest median voters score every candidate from 0 to 100 (or on some other scale). Then the candidate with the highest median score wins. That's it. It's that simple." (added Dec. 18, 2023, 4:27 a.m.)
- Haseeb >|< on X: "The Unreasonable Ineffectiveness of EAs I've been an Effective Altruist for over 10 years now. Watching EAs repeatedly faceplant in public has been tough. I still agree with the philosophy of Effective Altruism—my main problem with EA is not the ideas but the people. Not the…" (added Dec. 18, 2023, 4:27 a.m.)
- Enhancing intelligence by banging your head on the wall — LessWrong (added Dec. 18, 2023, 4:27 a.m.)
- Bountied Rationality (added Dec. 18, 2023, 4:27 a.m.)
- Mapping the semantic void: Strange goings-on in GPT embedding spaces — LessWrong (added Dec. 18, 2023, 4:27 a.m.)
- Faunalytics' Plans & Priorities For 2024 - Faunalytics (added Dec. 18, 2023, 4:27 a.m.)
- Federation of American Scientists🔬 (@FAScientists) / X (added Dec. 18, 2023, 4:27 a.m.)
- 2312.09323.pdf (added Dec. 18, 2023, 4:27 a.m.)
- Matthew Barnett on X: "Despite critiquing individual models of AI risk I still have a substantial credence (>10%) in very bad outcomes. Why? Because some heuristic arguments for AI risk seem strong. In this thread I'll sketch one heuristic argument I find plausible." (added Dec. 18, 2023, 4:27 a.m.)
- Jeffrey Ladish on X: "I'm pretty sad about the state of AI discourse right now. I see a lot of movement from object-level discussions of risk to meta-level social discourse on who is talking about risks and why e.g. "the EAs are trying to do X" "the e/accs are trying to do Y". Overall this sucks..." (added Dec. 18, 2023, 4:27 a.m.)
- Isaac King 🔍 on X: "@MatthewJBar @ESYudkowsky https://t.co/XEF2XEnDRp" (added Dec. 18, 2023, 4:27 a.m.)
- how can I stop giving gifts this year if I’ve always given them previously? [Askamanager] (added Dec. 18, 2023, 3:29 a.m.)
- update: I want my coworker to stop giving me “psychic messages” from my dead family members [Askamanager] (added Dec. 18, 2023, 3:29 a.m.)
- "employee named his dog after his manager, coworker keeps cooking for me, and more" [Askamanager] (added Dec. 18, 2023, 3:29 a.m.)
- Sunday thread [Slowboring] (added Dec. 18, 2023, 3:29 a.m.)
- The Serendipity of Density (added Dec. 18, 2023, 3:28 a.m.)
- https://twitter.com/OwainEvans_UK/status/1490920941362487298 [Twitter] (added Dec. 16, 2023, 3:11 p.m.)
- https://mobile.twitter.com/levelsio/status/1494255976215502848 [Mobile.twitter] (added Dec. 16, 2023, 3:11 p.m.)
- Benjamin Todd on Twitter: "The core idea of #effectivealtruism in a tweet: How much more effective are the best life saving charities compared to the average? Most people: 1.5x Global health experts: 100x Effective altruists: 100x https://t.co/CaLvctzkVj & the same applies within other causes https://t.co/4ufYqB1BXX" [Twitter] (added Dec. 16, 2023, 3:11 p.m.)
- https://twitter.com/DeepMind/status/1490730149314236417 [Twitter] (added Dec. 16, 2023, 3:10 p.m.)
- Fermat's Library on Twitter: "In linguistics Escher sentences are sentences which initially seem acceptable but upon further reflection have no well-formed meaning. https://t.co/uCOOwD4Er6" [Twitter] (added Dec. 16, 2023, 3:09 p.m.)
- Jason Collins on Twitter: "New post: Replicating scarcity https://t.co/o1BuPKYAoZ" [Twitter] (added Dec. 16, 2023, 3:09 p.m.)
- Alex Beal 🆎 on Twitter: "This episode is about the importance of zone 2 exercise (low intensity). Different levels of intensity exercise different metabolic pathways and zone 2 exercises a pathway that has been implicated in metabolic disorder. I've added 2 hr/week to my routine. https://t.co/jAHviQSjQg" [Twitter] (added Dec. 16, 2023, 3:08 p.m.)
- @levelsio on Twitter: "Mine: - work less and go on dates and see fam/friends more (move back to PT soon) - focus on deadlift and squat in gym (this year did bench a lot) - grow https://t.co/PL9rryfQ7X and expand into Spain Thailand Dubai and Bali - launch one new fun project (not in remote work)" [Mobile.twitter] (added Dec. 16, 2023, 3:08 p.m.)
- Internal Tech Emails on Twitter: "Mark Zuckerberg: "speed and strategy" February 14 2008 https://t.co/zx6m54tWr6" [Twitter] (added Dec. 16, 2023, 3:08 p.m.)
- twitter.com/RishiBommasani/status/1734962975146979788 [Twitter] (added Dec. 16, 2023, 12:48 p.m.)
- Jeremy Neufeld on X: "The House Select Committee on China just released its blockbuster report on US-China economic competition Key finding: the PRC is gaining on the United States in the race for global talent https://t.co/dKzuU1OZ91" [Twitter] (added Dec. 16, 2023, 12:48 p.m.)
- Marius Hobbhahn on X: "We took a careful look at trends in ML hardware. Main finding: lower-precision formats like FP16 & INT8 combined with specialized tensor cores increase computational performance by up to 10x on average. NVIDIA's H100 sees 30x speedup with INT8 vs FP32. 1/ https://t.co/8vkHk0Iyrj" [Twitter] (added Dec. 16, 2023, 10:33 a.m.)
- Shaun K.E. Ee on X: "[1/15] How can we adapt cybersecurity frameworks to address frontier AI risks? My colleagues at @_IAPS_ and I explore this question in our new “defense-in-depth” paper and provide recs for the Frontier Model Forum @NIST @CISAgov @MITREcorp and others: t.co/ROUiEd8JI8 t.co/Bwi3FcfiFa" [Twitter] (added Dec. 16, 2023, 10:32 a.m.)
- Vincent Weisser 📍SF on X: "Levels of AGI: Operationalizing Progress on the Path to AGI" by @GoogleDeepMind team Most definitions focus on capabilities rather than processes. Achieving AGI does not require human-like thinking consciousness or brain-like mechanisms. The focus should be on what an AGI… https://t.co/3MCwZrIkvn [Twitter] (added Dec. 16, 2023, 10:31 a.m.)
- xuan (ɕɥɛn / sh-yen) on X: "Sorry to be kinda annoying about this! But consider: "Humans won't be able to supervise compilers smarter than us. For example if a superhuman compiler generates a million lines of extremely complicated assembly we won't be able to tell if it's safe to run or not." [Twitter] (added Dec. 16, 2023, 10:13 a.m.)
- Rob Bensinger ⏹️ on X: "@balajis Great to know that we found a crux! A few years ago I think I'd have made all the same arguments as you. Like see this thread: https://t.co/om8iysinnh From my perspective governments are an incredibly dangerous and unreliable way to try to address existential risk from AI. https://t.co/wwm6ImPLlF" [Twitter] (added Dec. 16, 2023, 10:12 a.m.)
- AI Notkilleveryoneism Memes ⏸️ on X: "P(doom) roundup: what probability do people put on AI killing everyone? - Vitalik Buterin (Ethereum): 10% - Zvi Mowshowitz: 60% - Elon Musk: 20-30% - Scott Alexander: 20-25% - Dario Amodei (CEO Anthropic): 10-25% - Jan Leike (Head of Alignment OpenAI): 10-90% - Geoffrey Hinton… https://t.co/5RQDwlx5Ga" [Twitter] (added Dec. 16, 2023, 10:12 a.m.)
- Jaime Sevilla on X: "Great work from my colleague @pvllss in collaboration with Tom Davidson from @open_phil JS Denain from @UCBerkeley and Guillem Bas from @RiesgosGlobales" [Twitter] (added Dec. 16, 2023, 10:12 a.m.)
- David Johnston on X: "Here’s why I think advanced AI will not be an unstable beast that constantly liable to kill you at the slightest misstep: the argument for instability says that goals are unidentifiable from the given data and unless it picks the right goal you’re screwed." [Twitter] (added Dec. 16, 2023, 10:12 a.m.)
- Seán Ó hÉigeartaigh on X: "Also: should be a high priority target for funding in my view." [Twitter] (added Dec. 16, 2023, 10:12 a.m.)
- Cate Hall on X: "What is the best argument that transformative AI is >10 years away?" [Twitter] (added Dec. 16, 2023, 10:12 a.m.)
- twitter.com/TheZvi/status/1734924429321187629 [Twitter] (added Dec. 16, 2023, 10:12 a.m.)
- twitter.com/hokiepoke1/status/1734884377870442976 [Twitter] (added Dec. 16, 2023, 10:12 a.m.)
- twitter.com/Jsevillamol/status/1734961836645323149 [Twitter] (added Dec. 16, 2023, 10:12 a.m.)
- Marius Hobbhahn on X: "TLDR: the H100 lower precision and other advances lead to a big jump in computational performance. We're in for a wild ride when the next generation of models is trained on 100x more compute in 2024 and 2025." [Twitter] (added Dec. 16, 2023, 10:12 a.m.)
- Tetraspace 💎🔎💖 on X: "@thiagovscoelho Physically impossible: ~existence proof humans By year 2035: transformers seem surprisingly stuck at humanish levels By year 2100: but not that stuck Can disempower in 10 years: yea 10 years is quite a while. von Neumann infamously noticed the US had a 4 year lead on nukes https://t.co/YUZxFBenyD" [Twitter] (added Dec. 16, 2023, 10:11 a.m.)
- Max Reddel on X: "This is a great tool to have more informed debates on AI existential risk. My views are below. And here is the link to do it yourself: https://t.co/nnBFpcDAhE https://t.co/tDnY7HQXWt" [Twitter] (added Dec. 16, 2023, 10:11 a.m.)
- "Ada Lovelace Institute on X: "Medical regulators have long applied rigorous processes to new technologies that alongside possible benefits could present risks for people and society. Our new paper explores lessons FDA oversight can provide for AI foundation model governance: https://t.co/jT7yYqtTYC" / X" [Twitter]: NaN ('23 Dec 16Added Dec. 16, 2023, 9:38 a.m.in NaN | a)
- "PauseAI ⏸ on X: "The Pope says AI is "perhaps the highest-stake gamble of our future" and calls for an international treaty. He's right. A treaty is exactly what needs to happen. Self-regulation by companies will not be enough companies will always have strong incentives to race ahead.…" / X" [Twitter]: NaN ('23 Dec 16Added Dec. 16, 2023, 9:38 a.m.in NaN | a)
- "Liv Boeree on X: "I suspect a large part of why so many in Silicon Valley take this unfair view is they just can’t imagine why a sufficiently smart & technically capable person would *actively choose* the non-profit/low-earnings route to solving a problem and thus conclude the only explanation…" / X" [Twitter]: NaN ('23 Dec 16Added Dec. 16, 2023, 9:38 a.m.in NaN | a)
- Upcoming Links [Guarded-everglades-89687.herokuapp]: NaN ('23 Dec 16Added Dec. 16, 2023, 9:37 a.m.in NaN | a)
- "Matthew Barnett on X: "@daniel_271828 @Jsevillamol Off the top of my head I'd say about 10^28-10^32 FLOP." / X" [Twitter]: NaN ('23 Dec 16Added Dec. 16, 2023, 9:37 a.m.in NaN | a)
- "Matt Fuller on X: "A pretty damning situation here: Rep. Mike Garcia (R-CA) sold up to $50000 of Boeing stock just weeks before a committee he's on released a report on Boeing 737 crashes. He then blew the deadline and didn't disclose the trades until after Election Day. https://t.co/CKyTG52Vur" / X" [Twitter]: NaN ('23 Dec 16Added Dec. 16, 2023, 9:37 a.m.in NaN | a)
- "Pablo Villalobos on X: "AI capabilities can be significantly improved without expensive retraining: our latest paper explores post-training enhancements for LLMs which we categorize into five areas: Tool Use Prompting Scaffolding Solution Choice and Data https://t.co/1SLTgktTo2" / X" [Twitter]: NaN ('23 Dec 16Added Dec. 16, 2023, 9:36 a.m.in NaN | a)
- "Henry Shevlin on X: "Quick lesson in the dangers of data contamination. Years ago I came up with an acronym for remembering the periods of the Paleozoic era — “Catastrophic Overthrow Started Different Colder Period”. I was curious if ChatGPT could guess what it stood for. 1/4 https://t.co/Ddh3jwDfq4" / X" [Twitter]: NaN ('23 Dec 16Added Dec. 16, 2023, 9:36 a.m.in NaN | a)
- "Andrew Strait on X: "Hot christ🇻🇦 its been less than 24 hours⏲️ since Gemini Ultra dropped and I am having hourly panic attacks 🥴 about the sheer insAInity of what this model can do🤯 Here are my top 10 insane uses of Gemini I have personally witnessed change the world forever 🧵 (1/11)" / X" [Twitter]: NaN ('23 Dec 16Added Dec. 16, 2023, 9:36 a.m.in NaN | a)
- "Holly ⏸️ Elmore on X: "Excellent post! imo a lot of us in AI Safety have typical mind fallacy for how much other people would stay interested in scaling ML models if the hype died down. Not everyone is convinced of the power of AGI and this approach from first principles. https://t.co/Oct44G58sK" / X" [Twitter]: NaN ('23 Dec 16Added Dec. 16, 2023, 9:36 a.m.in NaN | a)
- "Mira Murati on X: "Exploring generalization properties of deep learning to control strong models with weak supervisors showing early promise." / X" [Twitter]: NaN ('23 Dec 16Added Dec. 16, 2023, 9:35 a.m.in NaN | a)
- Copy of Workshop_shared_ToC (added Dec. 16, 2023, 9:01 a.m.)
- Biden's NTIA wades into open source AI controversy (added Dec. 16, 2023, 9:01 a.m.)
- 2024-26 AW Department Strategic Plan [internal] - Google Docs (added Dec. 16, 2023, 9:01 a.m.)
- Post (added Dec. 16, 2023, 9:01 a.m.)
- I'm Free. But The Fight Has Just Begun. - by Wayne Hsiung (added Dec. 16, 2023, 9:01 a.m.)
- Rize · Maximize Your Productivity (added Dec. 16, 2023, 9:01 a.m.)
- Be A Dad - by Robin Hanson - Overcoming Bias (added Dec. 16, 2023, 9:01 a.m.)
- Having Kids (added Dec. 16, 2023, 9:01 a.m.)
- The Self-Immolation and Protest of Thích Quảng Đức (added Dec. 16, 2023, 9:01 a.m.)
- Why the Godfather of A.I. Fears What He’s Built (added Dec. 16, 2023, 9:01 a.m.)
- Is Argentina the First A.I. Election? - The New York Times (added Dec. 16, 2023, 9:01 a.m.)
- EU policymakers reach an agreement on the AI Act — EA Forum (added Dec. 16, 2023, 9:01 a.m.)
- reset-prevent-build-scc-report.pdf (added Dec. 16, 2023, 9:01 a.m.)
- This A.I. Subculture’s Motto: Go Go Go - The New York Times (added Dec. 16, 2023, 9:01 a.m.)
- Omidyar Network creates $30 million fund to boost AI diversity (added Dec. 16, 2023, 9:01 a.m.)
- Meet the Lawyer Leading the Human Resistance Against AI (added Dec. 16, 2023, 9:01 a.m.)
- Akin Intelligence - August 2023(1) (added Dec. 16, 2023, 9:01 a.m.)
- Couch to 5k - C25K Running Program (added Dec. 16, 2023, 9:01 a.m.)
- OP Grants - Google Sheets (added Dec. 16, 2023, 9:01 a.m.)
- Longview Fund naming report - Google Docs (added Dec. 16, 2023, 9:01 a.m.)
- James Pethokoukis ⏩️⤴️ on X: "Google DeepMind used a large language model to solve an unsolvable math problem https://t.co/zvEtTcuyRE https://t.co/RA2Fz5bJqr" (added Dec. 16, 2023, 9:01 a.m.)
- Superalignment Fast Grants (added Dec. 16, 2023, 9:01 a.m.)
- Stephen Clare on X: "I think it will be helpful to talk less about regulating AI and more about specific policy goals. Specific things like risk assessments and reporting external evals and explainability requirements just sound sensible whereas "regulation" sounds hamfisted and shady" (added Dec. 16, 2023, 9:01 a.m.)
- Google AI on X: "Introducing StyleDrop a model that allows a significantly higher level of stylized text-to-image synthesis by using a few style reference images that describe the style for text-to-image generation bypassing the burden of text prompt engineering. More→ https://t.co/F3Rw3QlbtP https://t.co/2J4wljmFwF" (added Dec. 16, 2023, 9:01 a.m.)
- Séb Krier on X: "lmao unbelievable https://t.co/LHoG0K71Gd https://t.co/xtzHYM5VVD" (added Dec. 16, 2023, 9:01 a.m.)
- Some for-profit AI alignment org ideas — LessWrong (added Dec. 16, 2023, 9:01 a.m.)
- GWWC is funding constrained (and prefers broad-base support) — EA Forum (added Dec. 16, 2023, 9:01 a.m.)
- Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible — LessWrong (added Dec. 16, 2023, 9:01 a.m.)
- "AI Alignment" is a Dangerously Overloaded Term — LessWrong (added Dec. 16, 2023, 9:01 a.m.)
- Current AIs Provide Nearly No Data Relevant to AGI Alignment — LessWrong (added Dec. 16, 2023, 9:01 a.m.)
- OP RFP - Google Docs (added Dec. 16, 2023, 9:01 a.m.)
- [JueYan Zhang] The upcoming AI safety funding landscape v2 - Google Docs (added Dec. 16, 2023, 9:01 a.m.)
- GHD fundraising strategy - Google Docs (added Dec. 16, 2023, 9:01 a.m.)
- Brendan Bordelon (@BrendanBordelon) / X (added Dec. 16, 2023, 9:01 a.m.)
- Tamay Besiroglu on X: "Pleased about this work. We wanted to know how much compute is possible with current tech and derived some bounds. Result: using the world's current energy consumption and maximally efficient GPUs yields 1e35 FP16 ± 0.7 OOMs about 10B-fold more than GPT-4." (added Dec. 16, 2023, 9:01 a.m.)
- Jaime Sevilla on X: "First Riesgos Globales has advised the Spanish presidency of the EU council on regulation foundation models. It's hard to understand the counterfactual impact but all our major recommendations were adopted in the EU AI Act. https://t.co/V7d2jyVQfn" (added Dec. 16, 2023, 9:01 a.m.)
- Aaron Bergman 🔍 ⏸️ (in that order) on X: "Huh yeah I bet this could get fixed with a built in result checking thing a bit like how code interpreter tries again after encountering errors given that it can accurately count its own inaccurate generations (each apple pic generated as a request for 6) https://t.co/HuPf6mOSFp" (added Dec. 16, 2023, 9:01 a.m.)
- twitter.com/alexeheath/status/1735805122104680675 (added Dec. 16, 2023, 9:01 a.m.)
- 2023+2024 Funding by Cause Area and Donor - Google Sheets (added Dec. 16, 2023, 9:01 a.m.)
- [draft post] Scenario analysis group for AI safety strategy [Docs.google] (added Dec. 16, 2023, 8:41 a.m.)
- https://forum.effectivealtruism.org/posts/btTeBHKGkmRyD5sFK/open-phil-should-allocate-most-neartermist-funding-to-animal?commentId=nX8Wn5mHhiNgJABum#nX8Wn5mHhiNgJABum [EA Forum] (added Dec. 16, 2023, 8:40 a.m.)
- vote for the worst boss of 2023: round 2 [Askamanager] (added Dec. 16, 2023, 8:03 a.m.)
- Friday good news … this time with updates [Askamanager] (added Dec. 16, 2023, 8:03 a.m.)
- Breaking Racial Polarization: A Case Study In The Deep South [Split-ticket] (added Dec. 16, 2023, 8:03 a.m.)
- Friday thread [Slowboring] (added Dec. 16, 2023, 8:03 a.m.)
- "weekend open thread – December 16-17, 2023" (added Dec. 16, 2023, 8:03 a.m.)
- Thursday thread [Slowboring] (added Dec. 15, 2023, 8:54 a.m.)
- To Pay or Not To Pay … College Athletes [The Dispatch] (news; added Dec. 15, 2023, 8:53 a.m.)
- update: I dated someone who was using me to get back at his ex-wife … who turned out to be my boss [Askamanager] (added Dec. 15, 2023, 8:53 a.m.)
- update: an acquaintance I recommended proselytized to all my clients (with singing) [Askamanager] (added Dec. 15, 2023, 8:53 a.m.)
- "updates: I’m in trouble for occasionally arriving a few minutes late, and more" [Askamanager] (added Dec. 15, 2023, 8:53 a.m.)
- Zelensky Delivers his Wish List to Washington [The Dispatch] (news; added Dec. 15, 2023, 8:50 a.m.)
- Welfare considerations for farmed shrimp — EA Forum (added Dec. 15, 2023, 7:20 a.m.)
- An analysis of US AI chip export controls v2 - Google Docs (added Dec. 15, 2023, 7:20 a.m.)
- How Europe Is Reshaping Amazon — The Information (added Dec. 15, 2023, 7:20 a.m.)
- OpenAI on X: "We're announcing together with @ericschmidt: Superalignment Fast Grants. $10M in grants for technical research on aligning superhuman AI systems including weak-to-strong generalization interpretability scalable oversight and more. Apply by Feb 18! https://t.co/eCKwZWLSZE" (added Dec. 15, 2023, 7:20 a.m.)
- CNAS Launches New Effort on Indo-Pacific Cybersecurity with Ambassador Nathanial C. Fick (added Dec. 15, 2023, 7:20 a.m.)
- ML Safety Newsletter #11 (added Dec. 15, 2023, 7:20 a.m.)
- Risto Uuk on X: "I just listened to this podcast episode by POLITICO Tech where the interviewed expert said that in the case of AI existential risks we don’t have empirical evidence nor rigorous mechanisms for assessing these claims: https://t.co/YgSwOdSX2I. I also listened to this episode…" (added Dec. 15, 2023, 7:20 a.m.)
- davidad 🎇 on X: "My top-level view about most questions regarding powerful AI is that if you’re *very confident* about anything you’re probably wrong to be so confident. https://t.co/v5KPNjKMuq" (added Dec. 15, 2023, 7:20 a.m.)
- The Centre for Long-Term Resilience on X: "The Implementation Update to the UK Government’s Resilience Framework marks a significant step forward. At the same time much work still needs to be done. In the blog post below we identify highlights from the update as well as key areas for improvement. https://t.co/rS6HnFDrDf" (added Dec. 15, 2023, 7:20 a.m.)
- Future of Life Institute on X: "The Pentagon’s Rush to Deploy AI-Enabled Weapons Is Going to Kill Us All" New in @TheNation Michael T. Klare (@mklare1) argues exactly that. "Despite warnings from scientists and diplomats that the safety of these programs cannot be assured and that their misuse could have… https://t.co/nie7qXgFFo" (added Dec. 15, 2023, 7:20 a.m.)
- A year of wins for farmed animals - by Lewis Bollard (added Dec. 15, 2023, 7:20 a.m.)
- Alexander Berger on X: "Great newsletter from @Lewis_Bollard on 10 wins for farm animal welfare during a bleak year for the cause globally: https://t.co/hXWP5LAQCI" (added Dec. 15, 2023, 7:20 a.m.)
- Juan Mateos Garcia on X: "Can safety-minded AI companies deliver safe AI in a competitive market? This economic model suggests this might be hard without regulations that force everyone to internalise harms from unsafe AGI. https://t.co/jmNjCMd2x7 https://t.co/WfJ480YwXe" (added Dec. 15, 2023, 7:20 a.m.)
- Pushmeet Kohli on X: "Can LLMs uncover new knowledge - the scientific equivalent of AlphaGo’s move 37? In a Nature paper today we @GoogleDeepMind unveil that FunSearch our new LLM based approach for program search has uncovered new results in Maths and Computing. See details below." (added Dec. 15, 2023, 7:20 a.m.)
- Google DeepMind on X: "Introducing FunSearch in @Nature: a method using large language models to search for new solutions in mathematics & computer science. 🔍 It pairs the creativity of an LLM with an automated evaluator to guard against hallucinations and incorrect ideas. 🧵 https://t.co/MC5ttgvZeM https://t.co/npxymdRxFo" (added Dec. 15, 2023, 7:20 a.m.)
- Theo Sanderson on X: "@alistairmcleay @Google @OpenAI @GoogleDeepMind the one I tried doesn't seem to work for any alterations to the image fwiw. still a good model but hard to evaluate with images from the web where it's seen the answer in text https://t.co/fHxTy0Uw63" (added Dec. 15, 2023, 7:20 a.m.)
- Richard Ngo (at NeurIPS) on X: "Amazing story. But also seems like a massive coordination failure. There aren’t that many math PhDs at UCLA; each of them should easily be able to snag a few hours of Terry’s time in the 4+ years they’re working on their thesis." (added Dec. 15, 2023, 7:20 a.m.)
- Jake Sullivan (@JakeSullivan46) / X (added Dec. 15, 2023, 7:20 a.m.)
- Zoe - 2023 Oct/Nov (Long) Performance Evaluation Form - Google Docs (added Dec. 15, 2023, 7:20 a.m.)
- PW Copy of IAPS OKR tracking 2023 - Google Docs (added Dec. 15, 2023, 7:20 a.m.)
- [WIP] Late-stage IR - Google Docs (added Dec. 15, 2023, 7:20 a.m.)
- Pope Calls for Treaty to Regulate AI (added Dec. 15, 2023, 7:20 a.m.)
- Ben Landau-Taylor on X: "Many people seem to think that in high-stakes situations you’re supposed to cast aside your morals to grab power and… I don’t know man. I genuinely don’t get it. To me it seems almost axiomatic that high-stakes situations are when holding onto your morals is *most* important.…" (added Dec. 15, 2023, 7:20 a.m.)
- Looking Back on the Big Policy Stories of 2023 (added Dec. 15, 2023, 7:20 a.m.)
- RAND — Making a Difference (added Dec. 15, 2023, 7:20 a.m.)
- Ethan Mollick on X: "For any complex topic the declining value of Google compared to AI-assisted search engines is getting clear. Despite small issues Bard Bing & Perplexity do a much better job. I would also suspect that they are often less error-prone than "doing your own research" with Google. https://t.co/7z9XZ9QADi" (added Dec. 15, 2023, 7:20 a.m.)
- Nora Belrose on X: "Interpretability research requires open source AI. Closed source models are black boxes." (added Dec. 15, 2023, 7:20 a.m.)
- Tyler Alterman (in monk mode: writing month) on X: "IMO if safetyists want to influence culture they’d do well to adopt an inspiring positive vision like “hum/acc” who (eg) want to create enlightened mentats & Bene Gesserit vs a negative reactive stance like “stop AGI before doom.” Ppl need something to believe in" (added Dec. 15, 2023, 7:20 a.m.)
- Congress approves bill barring any president from unilaterally withdrawing from NATO (added Dec. 15, 2023, 7:20 a.m.)
- EU policymakers reach an agreement on the AI Act — LessWrong (added Dec. 15, 2023, 7:20 a.m.)
- Four Thousand Weeks - 10 Practical Tools to Help Embrace Your Finitude - To Summarise (added Dec. 15, 2023, 7:20 a.m.)
- Announcing Surveys on Community Health Causes and Harassment — EA Forum (added Dec. 15, 2023, 7:20 a.m.)
- EU Member States Policy & Partnerships Lead Global Affairs (added Dec. 15, 2023, 7:20 a.m.)
- Some thoughts on where the war in Ukraine is headed (added Dec. 15, 2023, 6:22 a.m.)
- "acupuncture as a team-building activity, coworker turns down new work but isn’t doing much, and more" (added Dec. 15, 2023, 6:22 a.m.)
- Tessa Alexanian on X: "Out today! @FAScientists Bio x AI policy sprint recommendations: Screening automated labs - me BDT risk assessment - @RMoulange & @SophieMRose_ AI-Bio self-governance - @OllyMCrook Safe science compute cloud - @samuelmcurtis BDT x synthesis screening collabs - @ShresRath" (added Dec. 14, 2023, 11:07 a.m.)
- Elizabeth Van Nostrand on X: "Hat tip to @stuartbuck1's post for introducing me to Brian Nosek's pyramid of social change https://t.co/4u2jQmdtiR https://t.co/Eh5yWoCs9e" (added Dec. 14, 2023, 11:07 a.m.)
- Aidan McLau on X: "The irony of Mistral reigniting the “private companies have no edge” debate is Mistral’s success had nothing to do with open-sourcing their models. Mistral was trained in the dark. Nobody knows their methodology or dataset. Obviously Mistral wants to keep it that way to… https://t.co/h3Und67DKl" (added Dec. 14, 2023, 11:07 a.m.)
- Tristan Cunha on X: "@TylerAlterman Here's the paper: https://t.co/PnVee7rTNC It seems like the poison attack actually works better against the better models they tested. Maybe because bigger models are better at extracting data from training images? But it's not clear that this is a generalizable result. The…" (added Dec. 14, 2023, 11:07 a.m.)
- Zachary Nado on X: "@suchenzang we're in the $2 Uber rides phase of the AI tech cycle" (added Dec. 14, 2023, 11:07 a.m.)
- Sasha Rush on X: "For various reason several niche topics that I cover on this account are having a moment. Here's ones I have videos for if you want to follow along: Data-Constraints - Sometimes it makes sense to train on your data multiple times. https://t.co/AK3LKbOMZX" (added Dec. 14, 2023, 11:07 a.m.)
- FAR AI on X: "🌟🌐🤔#NeurIPS2023 Spotlight Poster: Unravel the mystery of AI morality! Don’t miss our session on "Evaluating Moral Beliefs in LLMs" on Dec 13 10:45 AM CST poster #1523. Insights from a study on 28 #LLMs by @ninoscherrer @causalclaudia & team." (added Dec. 14, 2023, 11:07 a.m.)
- Jeremy @ NeurIPS on X: "Today @ericries (creator of Lean Startup & LTSE) & I (https://t.co/GEOZunWoXj /Kaggle) are launching a new kind of R&D lab: https://t.co/AvAXMrebTd. We're backed by $10m of funding from @DecibelVC. For-profit R&D labs are rare today but have an amazing history... 🧵" (added Dec. 14, 2023, 11:07 a.m.)
- George McGowan (@GjMcGowan) / X (added Dec. 14, 2023, 11:07 a.m.)
- Dan Williams (@danwilliamsphil) / X (added Dec. 14, 2023, 11:07 a.m.)
- Matt Sheehan on X: "Great chatting w/ @HorizonIPS about how people can make a career in AI policy. There are a million paths into this field but I do think writing well is one of the most useful skills. Below I tried to give the kernel of my thinking on how to do that: https://t.co/HXugkZ1BOX https://t.co/q6FkDHZI2p" (added Dec. 14, 2023, 11:07 a.m.)
- Markus Anderljung on X: "Are Emergent Abilities of Large Language Models a Mirage?" is a great paper. However I think people often draw too strong conclusions from it. Sometimes people seem to say it disproves that it's hard to predict model capabilities. That doesn't seem right to me. (added Dec. 14, 2023, 11:07 a.m.)
- VC Firm OpenView Abruptly Winds Down After Key Partners Leave, Returns Sour — The Information (added Dec. 14, 2023, 11:07 a.m.)
- Some AI Startups Find the Money’s No Longer So Easy — The Information (added Dec. 14, 2023, 11:07 a.m.)
- ZW Thoughts - Responsibility Splits - Oct 2023 - Google Docs (added Dec. 14, 2023, 11:07 a.m.)
- Nonlinear’s Evidence: Debunking False and Misleading Claims — EA Forum (added Dec. 14, 2023, 11:07 a.m.)
- Nonlinear’s Evidence: Debunking False and Misleading Claims — LessWrong (added Dec. 14, 2023, 11:07 a.m.)
- A Model Estimating the Value of Research Influencing Funders — EA Forum (added Dec. 14, 2023, 11:07 a.m.)
- Auction — PETER SINGER (added Dec. 14, 2023, 11:07 a.m.)
- Dan Schwarz / X (added Dec. 14, 2023, 11:07 a.m.)
- Kat Woods - The evidence is public! 🥳 Recently Ben Pace wrote a... (added Dec. 14, 2023, 11:07 a.m.)
- What are the biggest conceivable wins for animal welfare by 2025? — EA Forum (added Dec. 14, 2023, 11:07 a.m.)
- Kat Woods - 𝗔𝗿𝗼𝘂𝗻𝗱 𝟳𝟓% 𝗼𝗳 𝗽𝗲𝗼𝗽𝗹𝗲 𝗰𝗵𝗮𝗻𝗴𝗲𝗱 𝘁𝗵𝗲𝗶𝗿 𝗺𝗶𝗻𝗱𝘀 𝗯𝗮𝘀𝗲𝗱 𝗼𝗻 𝘁𝗵𝗲... (added Dec. 14, 2023, 11:07 a.m.)
- Vaccine delivery: Timelines and drivers of delay in low- and middle-income countries — EA Forum (added Dec. 14, 2023, 11:07 a.m.)
- twitter.com/StefanFSchubert/status/1734930545656672723 (added Dec. 14, 2023, 11:07 a.m.)
- X (added Dec. 14, 2023, 11:07 a.m.)
- Santi Ruiz on X: "Challenge prizes" rewards for new inventions helped develop canning, space flight, and longitude. But it's hard to get government agencies to offer them. At Statecraft we talked to maybe the world's foremost expert on exactly this. https://t.co/r45rxBedLi (added Dec. 14, 2023, 11:07 a.m.)
- The Best of Don't Worry About the Vase - by Zvi Mowshowitz (added Dec. 14, 2023, 11:07 a.m.)
- twitter.com/random_walker/status/1734972700668702887 (added Dec. 14, 2023, 11:07 a.m.)
- twitter.com/Simeon_Cps/status/1735004953783976253 (added Dec. 14, 2023, 11:07 a.m.)
- twitter.com/robbensinger/status/1735022846005629186 (added Dec. 14, 2023, 11:07 a.m.)
- Balaji on X: "ANALYSIS OF COMPETING HYPOTHESES I appreciate this post by @robbensinger because it reminds me of a CIA technique called the "Analysis of Competing Hypotheses." The concept is documented at length in Chapter 8 of Heuer's 1999 book on the Psychology of Intelligence Analysis[1].… https://t.co/kTiKtsniFV" (added Dec. 14, 2023, 11:07 a.m.)
- Copy of OPR Outputs - Google Docs (added Dec. 14, 2023, 11:07 a.m.)
- Oliver Guest on X: "If governments international orgs or foundations want to support AI alignment to reduce large-scale AI risks what should they do? This is the topic of a new paper from me @michael__aird and @S_OhEigeartaigh. https://t.co/bGjuuCTSnG" (added Dec. 14, 2023, 11:07 a.m.)
- Faunalytics’ Plans & Priorities For 2024 — EA Forum (added Dec. 14, 2023, 11:07 a.m.)
- Buck Shlegeris on X: "New paper! We design and test safety techniques that prevent models from causing bad outcomes even if the models collude to subvert them. We think that this approach is the most promising available strategy for minimizing risk from deceptively aligned models. 🧵 https://t.co/u3cimZptUT" (added Dec. 14, 2023, 11:07 a.m.)
- AMA: Founder and CEO of the Against Malaria Foundation, Rob Mather — EA Forum (added Dec. 14, 2023, 11:07 a.m.)
- Underpromise, overdeliver — EA Forum (added Dec. 14, 2023, 11:07 a.m.)
- Institute for Security and Technology on X: "Advanced AI systems are proliferating at an astonishing rate with varying levels of access to their model components. To date there is no clear method for understanding the risks that can arise as access increases. Our latest report addresses this gap: https://t.co/uoZ68tMsJP https://t.co/yNJSfAsPKx" (added Dec. 14, 2023, 11:07 a.m.)
- Tamay Besiroglu on X: "Bridgewater Associates on explosive growth from AI: "Given these considerations full-blown explosive growth looks unlikely. But we wouldn’t rule it out at this early stage..." https://t.co/0ZZKdpGyhK" (added Dec. 14, 2023, 11:07 a.m.)
- Moral Reality Check (a short story) — LessWrong (added Dec. 14, 2023, 11:07 a.m.)
- Andreessen Horowitz Plots Infrastructure, American Dynamism Funds — The Information (added Dec. 14, 2023, 11:07 a.m.)
- Safeguarding the Safeguards (added Dec. 14, 2023, 11:07 a.m.)
- AI capabilities can be significantly improved without expensive retraining – Epoch (added Dec. 14, 2023, 11:07 a.m.)
- [2312.04616] Can apparent bystanders distinctively shape an outcome? Global south countries and global catastrophic risk-focused governance of artificial intelligence (added Dec. 14, 2023, 11:07 a.m.)
- How Google Got Back on Its Feet in AI Race — The Information (added Dec. 14, 2023, 11:07 a.m.)
- Observatorio de Riesgos Catastróficos Globales (ORCG) Recap 2023 — EA Forum (added Dec. 14, 2023, 11:07 a.m.)
- Rethink Priorities needs your support. Here's what we'd do with it. — EA Forum (added Dec. 14, 2023, 11:07 a.m.)
- Thune, Klobuchar release bipartisan AI bill (added Dec. 14, 2023, 11:07 a.m.)
- TAIGA workshop (added Dec. 14, 2023, 11:07 a.m.)
- Wednesday thread [Slowboring] (added Dec. 14, 2023, 10:12 a.m.)
- Chapter 2 of ANSI Common Lisp [Paulgraham] (added Dec. 14, 2023, 10:12 a.m.)
- "#1 Bismarck: the ultimate practical education in the "unrecognised simplicities" of high performance politics/government" (added Dec. 14, 2023, 10:09 a.m.)
- Traveling for abortions: The untold story (added Dec. 14, 2023, 10:09 a.m.)
- AI #42: The Wrong Answer (added Dec. 14, 2023, 10:09 a.m.)
- office holiday gift-giving stories: worst gifts and weirdest gifts (added Dec. 14, 2023, 10:09 a.m.)
- "update: when my boss wants me to do something I really don’t want to do, can I just … not?" (added Dec. 14, 2023, 6:12 a.m.)
- Are There Examples of Overhang for Other Technologies? (added Dec. 14, 2023, 6:12 a.m.)
- update: my boss is pressuring me to be more “visible” (added Dec. 14, 2023, 6:12 a.m.)
- how do I tell an employee he isn’t welcome at our holiday party? (added Dec. 14, 2023, 6:12 a.m.)
- Chapter 1 of ANSI Common Lisp (added Dec. 14, 2023, 6:12 a.m.)
- Is it time for a second look at Kamala Harris? (added Dec. 14, 2023, 6:12 a.m.)
- "letting a man open a door at an interview, Icy Hot at work, and more" (added Dec. 14, 2023, 6:12 a.m.)
- Biden Targets the Pharmaceutical Industry [The Dispatch] (added Dec. 13, 2023, 12:33 p.m. | news)
- Venezuela Squeezes Guyana [The Dispatch] (added Dec. 13, 2023, 12:33 p.m. | news)
- updates: the “it’s him or me” ultimatum, the buffet food, and more [Askamanager] (added Dec. 13, 2023, 10:22 a.m.)
- Yamaha P-Series Overview [Jefftk] (added Dec. 13, 2023, 10:22 a.m.)
- updates: my bosses praise me so much that it’s embarrassing, and more [Askamanager] (added Dec. 13, 2023, 10:22 a.m.)
- Tuesday thread [Slowboring] (added Dec. 13, 2023, 7:08 a.m.)
- “Expedited removal” won’t fix asylum (added Dec. 13, 2023, 7:08 a.m.)
- Why liberalism and leftism are increasingly at odds (added Dec. 13, 2023, 7:08 a.m.)
- 2024 Color Trends (added Dec. 13, 2023, 7:08 a.m.)
- updates: the snub, the person who didn’t take time off, and more (added Dec. 13, 2023, 7:08 a.m.)
- Balsa Update and General Thank You (added Dec. 13, 2023, 7:08 a.m.)
- OpenAI: Leaks Confirm the Story (added Dec. 13, 2023, 7:08 a.m.)
- update: after I hired someone, a mutual friend told me I’d made a huge mistake (added Dec. 13, 2023, 7:08 a.m.)
- The Best of Don’t Worry About the Vase (added Dec. 13, 2023, 7:08 a.m.)
- At least five interesting things for the middle of your week (#22) (added Dec. 13, 2023, 7:08 a.m.)
- update: my coworker made a creepy pass at me (added Dec. 13, 2023, 7:08 a.m.)
- update: my boss is handling my resignation badly (added Dec. 13, 2023, 7:08 a.m.)
- Another brutal year for the media industry [Slowboring] (added Dec. 12, 2023, 8:36 a.m. | economics)
- Son Of Bride Of Bay Area House Party [Astralcodexten] (added Dec. 12, 2023, 7:32 a.m. | humor)
- Here's a fictional dialogue with a generic EA that I think can perhaps help explain some of my thoughts about AI risks compared to most EAs (added Dec. 12, 2023, 5:29 a.m.)
- Stefan Schubert on X: ". @jeffrsebo and @rgblong: "The upshot is that humans have a duty to extend moral consideration to some AI systems by 2030. And if we have a duty to do that then we plausibly also have a duty to start preparing now" https://t.co/0JbmWOS2wk" (added Dec. 12, 2023, 5:29 a.m.)
- Ethan Mollick on X: "For those who don't follow AI closely: 1) An open source model (free anyone can download or modify) beats GPT-3.5 2) It has no safety guardrails There are good things about this release but also regulators IT security experts etc. should note the genie is out of the bottle." (added Dec. 12, 2023, 5:29 a.m.)
- Matthijs Maas on X: "Do you also wake up thinking that AI governance debates could use *more* AI concepts? You're in luck! My 3rd @legalpriority project dives into many fields to survey 101 definitions across 69 terms for advanced AI systems their abilities and impacts. https://t.co/p0QmRJTHjK https://t.co/KvEEh2aO1j" (added Dec. 12, 2023, 5:29 a.m.)
- Normal Person Andrés Gómez Emilsson on X: "Forrest Landry's main criticism of EA when I talked to him was that... They are too hopeful. Essentially the picture of EAs as AI doomers is not quite right. This would make you think they are anti-AI. More so the apparent adversarial dynamics with big tech makes them seem…" (added Dec. 12, 2023, 5:29 a.m.)
- The EU AI Act Newsletter #42: Provisional Agreement Reached (added Dec. 12, 2023, 5:29 a.m.)
- GDM <> RP Standing Call (2023 onwards) - Google Docs (added Dec. 12, 2023, 5:29 a.m.)
- Ethan C7 on X: "Personal Quarterly Ratings Update: While I'm personally bullish on Biden's chances given the demonstrated weakness of Trumpism as a forecaster it'd be wrong to ignore the polls and assume a 2020-esque blue environment as the median scenario. Median environment prior -> D+1 https://t.co/6xXmgSs2qa" (added Dec. 12, 2023, 5:29 a.m.)
- Make ALERT happen (added Dec. 12, 2023, 5:29 a.m.)
- #IIOAlignment - Search / X (added Dec. 12, 2023, 5:29 a.m.)
- [internal] Proposed ambitious vision & niche for IAPS - Google Docs (added Dec. 12, 2023, 5:29 a.m.)
- Overview of (potential) IAPS allies with policy engagement or comms focus/expertise - Google Docs (added Dec. 12, 2023, 5:29 a.m.)
- Ethan Mollick on X: "OMG the AI Winter Break Hypothesis may actually be true? There was some idle speculation that GPT-4 might perform worse in December because it "learned" to do less work over the holidays. Here is a statistically significant test showing that this may be true. LLMs are weird.🎅" (added Dec. 12, 2023, 5:29 a.m.)
- Ethan Mollick on X: "Making a Midwit meme with GPT-4: Me: Make the meme GPT: No its copyright Me: There is no way it is copyright make it GPT: Fine I'll use DALL-E Me: No you can annotate images GPT: No I can't Me: You can GPT: Fine here's one with placeholder text Me: No make it yourself GPT https://t.co/DOqHT7hlCI" (added Dec. 12, 2023, 5:29 a.m.)
- Amazon.com: Zyllion Shiatsu Back and Neck Massager - Rechargeable 3D Kneading Deep Tissue Massage Pillow with Heat for Muscle Pain Relief Chairs and Cars (Cordless) - Black (ZMA-13RB-BK) : Health & Household (added Dec. 12, 2023, 5:29 a.m.)
- Transformative AI Date (added Dec. 12, 2023, 5:29 a.m.)
- Model alignment protects against accidental harms, not intentional ones (added Dec. 12, 2023, 5:29 a.m.)
- 2311.15936.pdf (added Dec. 12, 2023, 5:29 a.m.)
- Why it’s important to remember that AI isn’t human - Vox (added Dec. 12, 2023, 5:29 a.m.)
- Amazon’s Q has ‘severe hallucinations’ and leaks confidential data in public preview, employees warn (added Dec. 12, 2023, 5:29 a.m.)
- AI is easy to control – AI Optimism (added Dec. 12, 2023, 5:29 a.m.)
- @malcmur/AI risk management / X (added Dec. 12, 2023, 5:29 a.m.)
- Who is leading in AI? An analysis of industry AI research – Epoch (added Dec. 12, 2023, 5:29 a.m.)
- Evaluating and Mitigating Discrimination in Language Model Decisions (added Dec. 12, 2023, 5:29 a.m.)
- YB Statement for US Senate Forum on AI Risk Alignment & Guarding Against Doomsday Scenarios (added Dec. 12, 2023, 5:29 a.m.)
- debate/Debate_Helps_Supervise_Unreliable_Experts.pdf at 2023-nyu-experiments · julianmichael/debate (added Dec. 12, 2023, 5:29 a.m.)
- [2309.02390] Explaining grokking through circuit efficiency (added Dec. 12, 2023, 5:29 a.m.)
- GO-Science - Future Risks of Frontier AI (added Dec. 12, 2023, 5:29 a.m.)
- UAE’s top AI group vows to phase out Chinese hardware to appease US (added Dec. 12, 2023, 5:29 a.m.)
- Introducing the AI Safety Institute - GOV.UK (added Dec. 12, 2023, 5:29 a.m.)
- My techno-optimism (added Dec. 12, 2023, 5:29 a.m.)
- [2310.08559] Phenomenal Yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement (added Dec. 12, 2023, 5:29 a.m.)
- [2311.08379] Scheming AIs: Will AIs fake alignment during training in order to get power? (added Dec. 12, 2023, 5:29 a.m.)
- https://zachfreitasgroff.com/FreitasGroff_Policy_Persistence.pdf (added Dec. 12, 2023, 5:29 a.m.)
- https://t.co/K7fa7eGTea (added Dec. 12, 2023, 5:29 a.m.)
- Ilan Gur on X: "🚨🔬Excited to share another opportunity space we're exploring @ARIA_research! "It will eventually be possible to build mathematically robust human-auditable models that comprehensively capture the physical phenomena and social affordances that underpin human flourishing." 😃 https://t.co/YIvsG10bDQ" (added Dec. 12, 2023, 5:29 a.m.)
- 2311.09247.pdf (added Dec. 12, 2023, 5:29 a.m.)
- [2311.05553] Removing RLHF Protections in GPT-4 via Fine-Tuning (added Dec. 12, 2023, 5:29 a.m.)
- Cate Hall on X: "Can anyone help me understand why safety-by-design architectures haven't been a more prominent focus of research agendas to date?" (added Dec. 12, 2023, 5:29 a.m.)
- Haydn Belfield on X: "This seems a bit a silly to me - they have completely different purposes and audiences One is a tiny startup trying to get employees & more a16z funding appealing almost entirely to techies" (added Dec. 12, 2023, 5:29 a.m.)
- Saturday thread [Slowboring] (added Dec. 12, 2023, 4:46 a.m.)
- At least five interesting things for the middle of your week (#21) [Noahpinion.blog] (added Dec. 12, 2023, 4:46 a.m. | random)
- The two-state solution is still best [Slowboring] (added Dec. 12, 2023, 4:32 a.m. | nationalsecurity)
- vote for the worst boss of 2023 [Askamanager] (added Dec. 12, 2023, 4:32 a.m. | management)
- CFO is obsessed with shooting rubber bands at people, professor turned down my request to be a reference, and more [Askamanager] (added Dec. 12, 2023, 4:31 a.m.)
- updates: the accommodation, the hated job, and more [Askamanager] (added Dec. 12, 2023, 4:31 a.m.)
- Risks of Ultrasound Neuromodulation [Sarahconstantin.substack] (added Dec. 12, 2023, 4:31 a.m.)
- Deeply Cover Car Crashes? [Jefftk] (added Dec. 12, 2023, 4:31 a.m.)
- Monday Mailbag [Slowboring] (added Dec. 12, 2023, 4:31 a.m.)
- Women fighting for their lives in the US [Yourlocalepidemiologist.substack] (added Dec. 12, 2023, 4:31 a.m.)
- You probably shouldn't give your money to an elite university (added Dec. 12, 2023, 4:31 a.m.)
- update: no one wants the office an employee died in four years ago (added Dec. 12, 2023, 4:31 a.m.)
- my junior employee won’t stop sharing his “expertise” (added Dec. 12, 2023, 4:31 a.m.)
- Antisemitism Beleaguers the Ivies [The Dispatch] (added Dec. 11, 2023, 12:22 p.m. | news)
- Car wars [Noahpinion.blog] (added Dec. 11, 2023, 11:26 a.m.)
- idiopathies [Gleech] (added Dec. 11, 2023, 11:26 a.m.)
- Uphold territorial integrity (added Dec. 11, 2023, 10:44 a.m.)
- Rapamycin is not an aging drug. But what is an aging drug? (added Dec. 11, 2023, 10:44 a.m.)
- Implicitly Typed C (added Dec. 11, 2023, 10:44 a.m.)
- Sunday thread (added Dec. 11, 2023, 10:44 a.m.)
- Open Thread 306 (added Dec. 11, 2023, 10:44 a.m.)
- Evaluating Philosophy (added Dec. 11, 2023, 10:44 a.m.)
- Sunak, Labour, CMA: What they're thinking about AI companies (added Dec. 11, 2023, 10:44 a.m.)
- update: my coworkers keep asking about my assault (added Dec. 11, 2023, 10:44 a.m.)
- ChinAI #247: XiaoIce, a Strange Species of Chatbot (added Dec. 11, 2023, 10:44 a.m.)
- Letter to the Harvard Corporation re Harvard president (added Dec. 11, 2023, 10:44 a.m.)
- visible bra lines at work, boss keeps winking at me, and more (added Dec. 11, 2023, 10:44 a.m.)
- Nathan 🔍 on X: "So far my model of China as not wanting and AI or nuclear arms race (because it achieves its objectives fine with conventional weapons). Is holding up well." [Twitter] (added Dec. 10, 2023, 1:43 p.m.)
- twitter.com/burdayur [Twitter] (added Dec. 10, 2023, 1:42 p.m.)
- Felix on X: "Microsoft paper claims ChatGPT 3.5 has ~20 billion parameters t.co/gZxh0l2VqX t.co/EDCWbLdYEz" [Twitter] (added Dec. 10, 2023, 1:40 p.m.)
- Riley Goodside on X: "Assistant Assistant — a GPT for up-to-date help using the OpenAI API including Assistants. Example: Assistant Assistant creates Assistant that explains shell commands in the style of Jar Jar Binks. Made in ~1hr (upload PDF docs run explain errors via chat to GPT Builder) https://t.co/30D2nMkJLD" [Twitter] (added Dec. 10, 2023, 1:38 p.m.)
- Alyssa Vance on X: "@tyler_m_john https://t.co/U9DK5F16bi" [Twitter] (added Dec. 10, 2023, 12:26 p.m.)
- davidad 🎇 on X: "Imagine it’s 2018. You read a tweet that says “Imagine it’s 2023. Gary Marcus is still an AI skeptic by which he means that his AGI timelines are confidently longer than 3 years.”" [Twitter] (added Dec. 10, 2023, 12:19 p.m.)
- Devansh Mehta on X: "A 🧵 on how to conduct PRODUCTIVE group meetings of over 10 people 1. Create prompt questions in a figma jam 2. Have all participants submit ideas to the prompts via sticky notes 3. Give a thumbs up on sticky notes you like briefly discuss the most upvoted ideas t.co/lRsbhtt4Ma" [Twitter] (added Dec. 10, 2023, 12:03 p.m.)
- Pradyumna on X: "I think this is an unfair characterization. Most people who are concerned about xrisk don't believe the odds of doom are 0.0000001 or whatever. Their personal probabilities are above 10% usually." [Twitter] (added Dec. 10, 2023, 12:01 p.m.)
- Eric Horvitz on X: "1/8 We’ve published a study of the power of prompting to unleash expertise from GPT-4 on medical benchmarks without additional fine-tuning or expert-curated prompts: https://t.co/qKI2ELKVQa Summary of results: https://t.co/KLMm8Qc9wy" [Twitter] (added Dec. 10, 2023, 11:49 a.m.)
- Tanishq Mathew Abraham PhD on X: "Intriguing properties of generative classifiers abs: https://t.co/ELYyeK46uO Another interesting paper from Google DeepMind demonstrating that classifiers derived from diffusion models has: 1. human-like shape bias 2. near human-level out-of-distribution accuracy 3.… https://t.co/ZqDYErYKLx" [Twitter] (added Dec. 10, 2023, 11:49 a.m.)
- The Cocktail Revolution [Worksinprogress.news] (added Dec. 10, 2023, 6:59 a.m.)
- ex-employee has been logging into our database, can I ask my coworkers to stop praising my bully, and more [Askamanager] (added Dec. 10, 2023, 6:59 a.m.)
- I accidentally ditched a peer at a conference and then cried publicly, foot-touching coworker, and more [Askamanager] (added Dec. 10, 2023, 6:59 a.m.)
- Stop saying "there is no decoupling". There is! [Noahpinion.blog] (added Dec. 10, 2023, 6:59 a.m.)
- The tyranny of climate targets [Slowboring] (added Dec. 10, 2023, 6:59 a.m. | policy)
- we have to give slide presentations about ourselves, should I have a no-weekend-work policy for my team, and more [Askamanager] (added Dec. 10, 2023, 6:49 a.m. | management)
- twitter.com/fabianstelzer/status/1709562237310878122 [Twitter] (added Dec. 10, 2023, 12:24 a.m.)
- Elizabeth Van Nostrand on X: "This is your call to test the potato + watermelon diet" [Twitter] (added Dec. 10, 2023, 12:23 a.m.)
- Jeff Dean (@🏡) on X: "I’m very excited to share our work on Gemini today! Gemini is a family of multimodal models that demonstrate really strong capabilities across the image audio video and text domains. Our most-capable model Gemini Ultra advances the state of the art in 30 of 32 benchmarks… https://t.co/sQfxBy9tpT" [Twitter] (added Dec. 10, 2023, 12:22 a.m.)
- Arvind Narayanan on X: "We must prepare for a world in which unaligned models exist either because threat actors trained them from scratch or because they modified an existing model. We must instead look to defend the attack surfaces that attackers might target using such models https://t.co/Y8xdhnEumR" [Twitter] (added Dec. 10, 2023, 12:21 a.m.)
- https://twitter.com/daniel_d_kang/status/1723048642003587526 [Twitter] (added Dec. 10, 2023, 12:20 a.m.)
- Hao Liu on X: "New paper w/ @matei_zaharia @pabbeel on transformers with large context size. We propose RingAttention which allows training sequences that are device count times longer than those of prior state-of-the-arts without attention approximations or incurring additional overhead. https://t.co/MWB8kF9nnk" [Twitter] (added Dec. 10, 2023, 12:19 a.m.)
- Melanie Mitchell on X: "New paper from my group: "Comparing Humans GPT-4 and GPT-4V On Abstraction and Reasoning Tasks". 🧵 (1/9) https://t.co/TWQIAFsVpu" [Twitter] (added Dec. 10, 2023, 12:18 a.m.)
- twitter.com/AISafetyMemes/status/1714384953696211345 [Twitter] (added Dec. 10, 2023, 12:18 a.m.)
- Tanishq Mathew Abraham PhD on X: "The Dawn of LMMs: Preliminary Explorations with GPT-4V(ision) Link: https://t.co/qhQr50rcAf A 166-page report from Microsoft qualitatively exploring GPT-4V capabilities and usage. Describes visual+text prompting techniques few-shot learning reasoning etc. Looks like it will… https://t.co/WPlRWAfj9A" [Twitter] (added Dec. 10, 2023, 12:18 a.m.)
- Eliezer Yudkowsky ⏹️ on X: "@Simon248 @So8res The problem with the purported Alignment Manhattan Project is that whoever is put in charge will not be able to distinguish deep progress from shallow progress nor ideas that might work from ideas that don't - eg OpenAI's plan is "we'll make the AI do our AI alignment homework"…" [Twitter] (added Dec. 10, 2023, 12:18 a.m.)
- twitter.com/DAlperovitch/status/1731864086852370747 [Twitter] (added Dec. 10, 2023, 12:17 a.m.)
- Matt Lichti on X: "@ChrisChattin @peterwildeford Arguments about the future are falsifiable. There's already a betting market about the claim Yudkowsky made 12 hours ago. You could design a more complicated metric about the percent of most viewed stories at various news sites that were about AI. https://t.co/5fGNUV64FO" [Twitter] (added Dec. 10, 2023, 12:14 a.m.)
- Charlotte Siegmann on X: "Yesterday I was fooled by this fake LLM-generated website. Took me more than 10 minutes to figure out this was fake. Why did it take me so long? The women in the photos looked real and trustworthy. My brain still needs to fully update that models can generate that.…" [Twitter] (added Dec. 10, 2023, 12:13 a.m.)
- Sasha Rush on X: "The OpenAI gossip on Q* is that it breaks through data constraints. If you're interested in this as a technical topic we tried to give an general overview in this talk. https://t.co/WBdSyk6YcC" [Twitter] (added Dec. 10, 2023, 12:05 a.m.)
- Siméon on X: "Dear compute nerds it seems like Google is training LLMs in int8 & I wonder whether it means we should add OP along FLOP for thresholds. Do you know if: 1. int8 operations are included in the native definition of FLOP? My guess is not but lmk 2. if 2 is a reasonable… https://t.co/0d5LYZsVj0" [Twitter] (added Dec. 10, 2023, 12:04 a.m.)
- Nora Belrose on X: "If by "values" you mean "goals in a largely consequentialist sense" then I think this premise is false by default. Mechanistically SGD optimizes parameters which induce behaviors and these behaviors may or may not be well-described as being directed toward a consistent goal." [Twitter] (added Dec. 10, 2023, 12:04 a.m.)
- Gary Marcus on X: "Hot take on Google Gemini and GPT-4: 👉Google Gemini seems to have by many measures matched (or slightly exceeded) GPT-4 but not to have blown it away. 👉From a commercial standpoint GPT-4 is no longer unique. That’s a huge problem for OpenAI especially post drama when many…" [Twitter] (added Dec. 10, 2023, 12:04 a.m.)
- Chelsea Finn on X: "LLMs fine-tuned with RLHF are known to be poorly calibrated. We found that they can actually be quite good at *verbalizing* their confidence. Led by @kattian_ and @ericmitchellai at #EMNLP2023 this week. Paper: https://t.co/CAXm6Evnk0 https://t.co/akHwdCpN7N" [Twitter] (added Dec. 10, 2023, 12:04 a.m.)
- twitter.com/ShakeelHashim/status/1727652452021735565 [Twitter] (added Dec. 10, 2023, 12:04 a.m.)
- twitter.com/kyutai_labs/status/1725483921041760323?utm_source=substack&utm_medium=email [Twitter] (added Dec. 10, 2023, 12:02 a.m.)
- Stefan Schubert on X: "Yes people giving explanations too frequently fail to make the obvious test "what about other countries?" A bit embarrassing that this simple error is so common. Rationality is to an underappreciated extent about paying attention to these very basic things." [Twitter] (added Dec. 10, 2023, 12:02 a.m.)
- Nora Belrose on X: "I respect Joe a lot but I think he's giving too much weight to "counting arguments" here. I don't think fine tuning a foundation model is relevantly similar to "pulling a goal out of a hat" but more like "moulding a hot mess into something a bit more coherent" https://t.co/ouxC7ju9We" [Twitter] (added Dec. 10, 2023, 12:02 a.m.)
- Luca Bertuzzi on X: "There is nothing better than some Friday drama to close such a hectic week. A technical meeting on the #AI Act has broken down today but make no mistake: the issue is deeply political and if no solution is found soon the whole law is at risk. A 🧵1/8 https://t.co/oMihrccA1o" [Twitter] (added Dec. 10, 2023, midnight)
- 𝖒𝖎𝖈𝖍𝖆𝖊𝖑𝖈𝖚𝖗𝖟𝖎 on X: "Imagining a virtue ethics themed AI safety movement No clever arguments can justify being a weird sneaky douchebag. You'd rather die of AI than betray your sense of honor. You don't kill 1 to save 5. But you also think creating beings more intelligent than us is a bad idea https://t.co/gdUuNV60Kz" [Twitter] (added Dec. 10, 2023, midnight)
- Elizabeth A. Seger on X: "Excited to release this new @GovAI report outlining the risks and benefits of open-sourcing highly capable AI systems and alternative methods for pursuing some open-source goals. (1/10) Summary thread below 🧵 https://t.co/16vPAVbMQS https://t.co/rtaqZPJcHS" [Twitter] (added Dec. 9, 2023, 11:59 p.m.)
- Hattie Zhou on X: "What algorithms can Transformers learn? They can easily learn to sort lists (generalizing to longer lengths) but not to compute parity -- why? 🚨📰 In our new paper we show that "thinking like Transformers" can tell us a lot about which tasks they generalize on! https://t.co/ZeGyKqZZM9" [Twitter] (added Dec. 9, 2023, 11:59 p.m.)
- Siméon on X: "I've been chatting to many of you about alternative architectures as a more credible path to safety than making transformers safe. If you want to dig deeper into one of the most exciting one out there the Open Agency Architecture check the bibliography of this announcement." [Twitter] (added Dec. 9, 2023, 11:59 p.m.)
- Robert Wiblin on X: "The World Values Survey is the most comprehensive attempt to understand what people around the world believe and care about on moral and cultural topics (n=94000 across 64 countries from 2017-2022). Here are 12 results I found interesting: 1. 39% of the world thinks 'Men…" [Twitter] (added Dec. 9, 2023, 11:59 p.m.)
- Jaime Sevilla on X: "Possibly unpopular opinion: I don't think governments should focus much on subsidizing AI alignment R&D especially if it is capital-intensive. Even with externalities there are already massive profits for companies developing better control techniques!" [Twitter] (added Dec. 9, 2023, 11:56 p.m.)
- Zvi Mowshowitz on X: "What is your best one-short-sentence for-the-public explanation of what LLMs are already capable of doing? Assume the person you are talking to does not know about them at all." [Twitter] (added Dec. 9, 2023, 11:56 p.m.)
- JgaltTweets on X: "Fwiw I think his approval rating will probably improve a bit by the time of the election; but mostly because of polarization (i.e. voters comparing him to Trump) rather than actually evaluating him more positively. Will probably still be quite underwater then though. t.co/dqJarRur4M" [Twitter] (added Dec. 9, 2023, 11:55 p.m.)
- xuan (ɕɥɛn / sh-yen) on X: "Pretty good explanation of why one might be skeptical (like I am) of transformer-based LLM scaling: Single forward pass def. can't express most complicated algorithms. Autoregressive generation can express much more but learning will encourage non-generalizable shortcuts." [Twitter] (added Dec. 9, 2023, 11:55 p.m.)
- Matt Clifford on X: "Quite extraordinary response to the announcement of the UK AI Safety Institute last week at Bletchley Park. A thread of some of the reactions...👇 https://t.co/CzEJDF1zo6" [Twitter] (added Dec. 9, 2023, 11:54 p.m.)
- Amesh Adalja on X: "“At a meeting this summer one of the president's cabinet members asked the bot that exact question: ‘Can you make me a bioweapon?’ according to a report from Politico. It couldn't” https://t.co/QfpeqWK9cU" [Twitter] (added Dec. 9, 2023, 11:54 p.m.)
- Shital Shah on X: "RL community should be in awe and shock from Eureka paper🫨. The idea here is that you feed the source code of environment to GPT-4 and ask it to write code for the reward function itself! Then you evaluate this reward function in simulation and provide your evaluation results…" [Twitter] (added Dec. 9, 2023, 11:54 p.m.)