<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Deconfusion device]]></title><description><![CDATA[Failing to understand the world, learning a little along the way]]></description><link>https://blog.joshuablake.co.uk</link><image><url>https://substackcdn.com/image/fetch/$s_!eZVT!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5751f650-968f-42d6-9df2-8b116eb07b89_1024x1024.png</url><title>Deconfusion device</title><link>https://blog.joshuablake.co.uk</link></image><generator>Substack</generator><lastBuildDate>Wed, 08 Apr 2026 20:03:39 GMT</lastBuildDate><atom:link href="https://blog.joshuablake.co.uk/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Joshua Blake]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[deconfusiondevice@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[deconfusiondevice@substack.com]]></itunes:email><itunes:name><![CDATA[Joshua Blake]]></itunes:name></itunes:owner><itunes:author><![CDATA[Joshua Blake]]></itunes:author><googleplay:owner><![CDATA[deconfusiondevice@substack.com]]></googleplay:owner><googleplay:email><![CDATA[deconfusiondevice@substack.com]]></googleplay:email><googleplay:author><![CDATA[Joshua Blake]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Advice for (potential) PhD students]]></title><description><![CDATA[What I wish I knew in 2019]]></description><link>https://blog.joshuablake.co.uk/p/advice-for-potential-phd-students</link><guid isPermaLink="false">https://blog.joshuablake.co.uk/p/advice-for-potential-phd-students</guid><dc:creator><![CDATA[Joshua 
Blake]]></dc:creator><pubDate>Fri, 28 Jun 2024 15:41:39 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5d7fe3f6-7cda-4aff-b222-e101c9b3501a_1024x1024.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Here are some things I wish I knew at the start of my PhD. Better routines and knowing when to stop a project would have made me more effective, so I hope this helps others avoid similar mistakes.</p><p>For background, I&#8217;ve just finished a PhD at the intersection of Bayesian statistics and infectious disease epidemiology, based at the University of Cambridge, UK.</p><h1><strong>Be cautious about advice</strong></h1><p>Before I begin, a warning. PhD experiences vary widely, influenced by numerous factors such as country, university, supervisor, field, and your unique personality and work style. Country especially: American and UK PhDs are such different experiences that they should be considered different things. Therefore, take all advice, including mine, with a large grain of salt. Don&#8217;t be surprised if you hear conflicting advice or experiences from different people; use the parts that work for you.</p><p>Focus on the advice relevant to you. For example, if you have the same supervisor as someone else, seek tips on effective interaction with them. If you're in the same department, identify who is valuable to talk to. Relatedly, seek advice from people who share your supervisor, department, or field where possible.</p><p>Along those lines, here is some other advice I think is worth reading.</p><ul><li><p><a href="https://athowes.github.io/posts/2024-04-01-phd-advice/">Adam Howes</a>: I basically agree with him on everything. 
The one point of disagreement is that I&#8217;m not sure you need to be thinking as strategically as he says with your research, especially in your first year.</p></li><li><p><a href="https://forum.effectivealtruism.org/posts/B7AQF7HNiLRbKMKJt/how-to-phd">eca (pseudonymous)</a>: I disagree with this for <a href="https://forum.effectivealtruism.org/posts/B7AQF7HNiLRbKMKJt/how-to-phd?commentId=kQzC4FNd3Cncczags">the same reasons as Adam Gleave</a>. Like him, the academics I worked with didn&#8217;t respond strongly to incentives, although I haven&#8217;t been in industry to make that comparison.</p></li><li><p><a href="https://docs.google.com/document/d/16xQ4N4ubRpy6Rj1LCESjb2On4vbUlvoz9ZU2YL-IrEI/edit">Sandy Hickson</a>: focuses on the process of choosing / getting a PhD. While going as deep as him on choice is probably good, the vast majority of people don&#8217;t (I met my supervisor once, for 30 mins, although I was definitely making a choice with too little information). I was a non-international student and applied to an already-funded project, so I can&#8217;t comment on that part.</p></li></ul><h1><strong>Choosing a PhD</strong></h1><p>Consider your PhD choice as seriously as a job decision. Evaluate the "boss" (supervisor) and "team" (research group) dynamics. Evaluate the culture within the research group. How collaborative is it, both within the group and externally? Understand your potential supervisor's management style and the level of attention they will provide. Often, a postdoc will supervise your work instead of your main supervisor. Hands-on supervisors provide more guidance but less freedom. This is probably the most crucial part of your choice. The best people to talk to will be recent students of your potential supervisor.</p><p>You might want to consider what you&#8217;re hoping to get out of the PhD. One extreme is people who just want the credential (e.g. they want a job that requires a PhD but the details don&#8217;t matter). 
In that case, choose a programme where students reliably finish on time. Your goals might not align with your supervisor's, so be prepared to assert your needs. The other extreme is those who are sure they want to pursue a career in academic research afterwards. Then, getting good publication(s) is very important; for fields where author order matters (e.g. life sciences), these should be with you as first author. Assess where your potential supervisor, and their recent students, have published; often, one publication in a top journal is worth multiple in lower-ranked journals. Most people fall somewhere between these extremes, and aren&#8217;t sure what they want to pursue post-PhD. Consider different aspects of your potential projects and find a balance; remember, no project is perfect.</p><p>The key part of a PhD is gaining knowledge and skills; ideally, transferable ones. This is <em>far</em> more important than the impact or usefulness of the research within your PhD itself (although that&#8217;s good too). Therefore, being in the right general area and using the right methods is more important than the specific project. The skills, connections, and ideas you gain will open doors later.</p><p>Remember, your research direction might evolve in unexpected ways.</p><h1><strong>Doing a PhD</strong></h1><p>A regular work routine helps you stay organised and manage time effectively. Whether you prefer traditional 9-to-5 hours or a schedule that suits your lifestyle, having a consistent routine will also help you maintain a healthy work-life balance.</p><p>Setting clear goals and getting your supervisor's support is crucial for a successful PhD. Don't be afraid to challenge your supervisor's ideas if necessary, but do so respectfully and thoughtfully. Changing supervisors can be disruptive, so it's important to weigh the pros and cons carefully before making a decision.</p><p>Be wary of the never-ending nature of research projects. 
It's important to recognise that there will always be limitations and potential extensions to your work. If you're starting to lose motivation, consider wrapping up your current work into a coherent piece, such as a paper or thesis chapter, before moving on. If your supervisor disagrees with your assessment of the project's completion, ask if you can take some time to write up what you have. It's often easier to discuss what's missing and whether there's enough material for a paper once it's in draft form.</p><p>During your PhD, you have a lot of freedom. Use it to explore side projects like collaborative research, outreach, or internships. These experiences can help you determine your career path after graduation. While the number of options can be overwhelming, try a few different things to find what interests you the most. As David Allen said, "you can do anything you want, but not everything."</p><p>If you want to stay in research, start thinking about future research directions early and <strong>write them down</strong>; you won&#8217;t remember them in six months. Identify external collaborators and compare potential projects based on the effort required, impact, your motivation, your comparative advantage in executing them, and how much you&#8217;ll learn from them.</p><p>PhD days are incredibly unstructured; having two hours of scheduled time per week (a group meeting and a supervision meeting) is common. A particular challenge is the very high proportion of <a href="https://www.pasteurscube.com/notes-and-reflections-on-deep-work-by-cal-newport/">deep work</a>: stuff you can only do with intense focus. It&#8217;s often worth just taking an hour or two off for a nap or a walk and then going back to something, rather than forcing yourself to stare at a screen without making progress.</p><p>Find a working style that suits you. 
For me that was prioritising the most important task each morning and then doing something I&#8217;m more excited by, but not necessarily that important, in the afternoon. This approach can enhance productivity and maintain motivation. I strongly recommend maintaining a list of useful or interesting things that would be helpful to do or read, but aren&#8217;t necessarily that important.</p><h1><strong>Writing your thesis</strong></h1><p>Writing a thesis is daunting; I struggled, and most people do. Everyone tells you to complete projects and draft papers or chapters as you go; this is great advice that few people follow through on. Maybe you can buck that trend.</p><p>Allow more time than you think for writing. Writing a chapter from scratch is hard; each one took me around two weeks. You&#8217;ll then have a bunch of comments to address and revisions to make. Starting from a pre-existing paper, it would take me around a week to write the chapter and there'd be far fewer issues with it. It&#8217;s better to have a draft early and then add to it than to overrun. See also: earlier comments about never-ending research.</p><h1><strong>Sitting your viva</strong></h1><p>The viva (or defence) is daunting, but doesn&#8217;t influence much: the majority of your corrections will be decided beforehand. However, your answers during the viva may reduce their extent; for example, you might be able to convince them that something just needs to be explained better and isn&#8217;t wrong.</p><p>Preparation quickly hits diminishing returns. I spent four or five days in the week beforehand preparing and this felt about right; remember to be well-rested through this period too. I recommend making sure you are on top of the details (e.g. definitions of technical concepts) and can explain why your arguments are correct (e.g. implicit steps in your reasoning). Rereading your thesis with fresh eyes can help you find mistakes and identify the most important parts. 
Other useful activities include going over key papers that you have cited or that your examiners were involved in, and summarising each chapter and the entire thesis into one page each. This exercise can help you get out of the weeds of the details and think about the big picture.</p><p>The types of questions you may be asked vary greatly depending on your examiners, so it is impossible to prepare for all of them and hard to predict what they&#8217;ll be. Your supervisor may be able to give you clues about what each examiner will focus on, but it is also important to be prepared for anything. You&#8217;ll normally be asked to give a short summary (3-5 mins) of your work at the start. I don&#8217;t think this is really examined; it&#8217;s just to get you into your flow, but be prepared for it.</p><p>On the day of your viva, it is normal to be nervous. Control whatever you can control: make sure the room is set up correctly, you have everything you need with you (a copy of your thesis, any notes, water, etc.), you&#8217;ve planned out your time through the day, and you remember to eat and drink.</p><p>Finally, remember that they are asking you questions about work you have spent over three years on. 
There is no one in the world who knows more about this stuff than you do.</p>]]></content:encoded></item><item><title><![CDATA[Forecasting accidentally-caused pandemics]]></title><description><![CDATA[How many does history tell us to expect?]]></description><link>https://blog.joshuablake.co.uk/p/forecasting-accidentally-caused-pandemics</link><guid isPermaLink="false">https://blog.joshuablake.co.uk/p/forecasting-accidentally-caused-pandemics</guid><dc:creator><![CDATA[Joshua Blake]]></dc:creator><pubDate>Wed, 17 Jan 2024 19:25:27 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ff61430-7499-49c7-9110-569c2f7f0443_1181x787.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Future pandemics could arise from an accident (for example, a pathogen used in research accidentally infecting a human). The risk from accidental pandemics is likely increasing in line with the amount of research being conducted. In order to prioritise pandemic preparedness, forecasts of the rate of accidental pandemics are needed. Here, I describe a simple model, based on historical data, showing that the rate of accidental pandemics over the next decade is almost certainly lower than that of zoonotic pandemics (pandemics originating in animals).</p><p>Before continuing, I should clarify what I mean by an accidental pandemic. By 'accidental pandemic,' I refer to a pandemic arising from human activities, but not from malicious actors. 
This covers a wide variety of activities, from lab-based research and clinical trials to more unusual ones such as hunting for viruses in nature.</p><p>The first consideration in the forecast is the historic number of accidental pandemics. One historical pandemic (<a href="https://en.wikipedia.org/wiki/1977_Russian_flu">1977 Russian flu</a>) is widely accepted to be due to research gone wrong, with the leading hypothesis being a clinical trial. The estimated death toll from this pandemic is 700,000. The origin of the COVID-19 pandemic <a href="https://www.vox.com/22453571/lab-leak-covid-19-coronavirus-hypothesis-wuhan-virology-china">is disputed</a>, and I won&#8217;t go further into that argument here. Therefore, historically, there have been one or two accidental pandemics.</p><p>Next, we need to consider the amount of research that could cause such a pandemic, or the number of &#8220;risky research units&#8221; that have been conducted. No good data exists on risky research units directly. However, we only need a measure that is proportional to the number of experiments.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> I consider three indicators: publicly reported lab accidents, as collated by <a href="https://f1000research.com/articles/10-752">Manheim and Lewis (2022)</a>; the rate at which BSL-4 labs (labs handling the most dangerous pathogens) are being built, gathered by <a href="https://www.globalbiolabs.org/">Global BioLabs</a>; and the number of virology papers being published, categorised by the <a href="https://www.webofscience.com/wos">Web of Science database</a>. 
I find a good fit with a shared rate of growth at 2.5% per year.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Xamz!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F761fd1a9-821c-4d33-8301-1dbc000dc05d_1181x787.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Xamz!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F761fd1a9-821c-4d33-8301-1dbc000dc05d_1181x787.png 424w, https://substackcdn.com/image/fetch/$s_!Xamz!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F761fd1a9-821c-4d33-8301-1dbc000dc05d_1181x787.png 848w, https://substackcdn.com/image/fetch/$s_!Xamz!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F761fd1a9-821c-4d33-8301-1dbc000dc05d_1181x787.png 1272w, https://substackcdn.com/image/fetch/$s_!Xamz!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F761fd1a9-821c-4d33-8301-1dbc000dc05d_1181x787.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Xamz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F761fd1a9-821c-4d33-8301-1dbc000dc05d_1181x787.png" width="1181" height="787" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/761fd1a9-821c-4d33-8301-1dbc000dc05d_1181x787.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:787,&quot;width&quot;:1181,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Xamz!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F761fd1a9-821c-4d33-8301-1dbc000dc05d_1181x787.png 424w, https://substackcdn.com/image/fetch/$s_!Xamz!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F761fd1a9-821c-4d33-8301-1dbc000dc05d_1181x787.png 848w, https://substackcdn.com/image/fetch/$s_!Xamz!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F761fd1a9-821c-4d33-8301-1dbc000dc05d_1181x787.png 1272w, https://substackcdn.com/image/fetch/$s_!Xamz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F761fd1a9-821c-4d33-8301-1dbc000dc05d_1181x787.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Number of events per year in each of the three datasets (dots). Lines show the line of best fit from a Poisson regression, with 95% prediction interval.</figcaption></figure></div><p>A plateau in the number of virology papers in the Web of Science database is plausibly visible. It is too early to tell if this trend will feed through to the number of labs or datasets, but this is a weakness of this analysis. However, a similar apparent plateau is visible in the 1990s, yet growth then appeared to restart along the previous trendline.</p><p>The final step is to extrapolate this growth in risky research units and see what it implies for how many accidental pandemics we should expect to see. Below I plot this: the average (expected) number of pandemics per year. Two scenarios are considered: where the basis is one historical accidental pandemic (1977 Russian flu) and where the basis is two historical accidental pandemics (adding COVID-19). 
For comparison, I include the historic long-run average number of pandemics per year, 0.25.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!pqQE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ff61430-7499-49c7-9110-569c2f7f0443_1181x787.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!pqQE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ff61430-7499-49c7-9110-569c2f7f0443_1181x787.png 424w, https://substackcdn.com/image/fetch/$s_!pqQE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ff61430-7499-49c7-9110-569c2f7f0443_1181x787.png 848w, https://substackcdn.com/image/fetch/$s_!pqQE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ff61430-7499-49c7-9110-569c2f7f0443_1181x787.png 1272w, https://substackcdn.com/image/fetch/$s_!pqQE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ff61430-7499-49c7-9110-569c2f7f0443_1181x787.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!pqQE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ff61430-7499-49c7-9110-569c2f7f0443_1181x787.png" width="1181" height="787" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2ff61430-7499-49c7-9110-569c2f7f0443_1181x787.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:787,&quot;width&quot;:1181,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!pqQE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ff61430-7499-49c7-9110-569c2f7f0443_1181x787.png 424w, https://substackcdn.com/image/fetch/$s_!pqQE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ff61430-7499-49c7-9110-569c2f7f0443_1181x787.png 848w, https://substackcdn.com/image/fetch/$s_!pqQE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ff61430-7499-49c7-9110-569c2f7f0443_1181x787.png 1272w, https://substackcdn.com/image/fetch/$s_!pqQE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ff61430-7499-49c7-9110-569c2f7f0443_1181x787.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 
7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Predictions for the mean number of accidental pandemics each year, in comparison to the long-run historical average.</figcaption></figure></div><p>Predictions for the ten years starting with 2024 are in the table below. 
This gives, for each scenario: the number of accidental pandemics that are expected, a range which the number of pandemics should fall in with at least 80% probability, and the probability of at least one accidental pandemic occurring.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!357g!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0b08803-d8ec-49b7-ab2d-1a52c10bb267_482x154.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!357g!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0b08803-d8ec-49b7-ab2d-1a52c10bb267_482x154.png 424w, https://substackcdn.com/image/fetch/$s_!357g!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0b08803-d8ec-49b7-ab2d-1a52c10bb267_482x154.png 848w, https://substackcdn.com/image/fetch/$s_!357g!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0b08803-d8ec-49b7-ab2d-1a52c10bb267_482x154.png 1272w, https://substackcdn.com/image/fetch/$s_!357g!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0b08803-d8ec-49b7-ab2d-1a52c10bb267_482x154.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!357g!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0b08803-d8ec-49b7-ab2d-1a52c10bb267_482x154.png" width="482" height="154" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b0b08803-d8ec-49b7-ab2d-1a52c10bb267_482x154.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:154,&quot;width&quot;:482,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:10811,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!357g!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0b08803-d8ec-49b7-ab2d-1a52c10bb267_482x154.png 424w, https://substackcdn.com/image/fetch/$s_!357g!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0b08803-d8ec-49b7-ab2d-1a52c10bb267_482x154.png 848w, https://substackcdn.com/image/fetch/$s_!357g!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0b08803-d8ec-49b7-ab2d-1a52c10bb267_482x154.png 1272w, https://substackcdn.com/image/fetch/$s_!357g!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0b08803-d8ec-49b7-ab2d-1a52c10bb267_482x154.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a><figcaption class="image-caption">Predictions for the ten years starting with 2024. The expected number of accidental pandemics, an 80% prediction interval, and the probability of seeing at least one pandemic. 
The scenarios correspond to different assumptions about the number of accidental pandemics that have previously occurred.</figcaption></figure></div><p>Overall, the conclusion from the model is that, for the next decade, the threat of zoonotic pandemics is likely still greater. However, if lab activity continues to increase at this rate, accidental pandemics may dominate.</p><p>The model here is extremely simple, and a more complex one would very likely decrease the number forecast. In particular, this model relies on the following major assumptions.</p><p>First, the actual number of risky research units is proportional to the three indicators chosen. That all three indicators are growing at similar rates lends credence to this view. However, this assumption is, in practice, almost impossible to verify. Each of the datasets used here has issues,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> and further work here would certainly be useful.</p><p>Second, the number of risky research units is growing exponentially, and this will continue over the extrapolation period. The plateau in the number of virology papers being published is the most concerning feature of the data here: it suggests that growth in the number of risky research units might be slowing. On the other hand, increasing access to biological research could cause an increase in risky research, but I think a step-change over the next decade is unlikely.</p><p>Third, the probability of an accidental pandemic per risky research unit is constant. This seems unlikely. Biosafety (actions to reduce the risk of lab accidents) is becoming more prominent and, as in most of society, safety measures are increasing. This is especially true in comparison to the 1970s, when the Russian flu pandemic occurred. 
In fact, rerunning the above projection using only the single accidental pandemic that had occurred by 1977 implies a more than 90% probability of two or more accidental pandemics by the present day. However, as risky research is done more broadly (e.g.: in less developed countries), biosafety may decrease. Hence, the current risk of an accidental pandemic per risky research unit is probably overstated by this analysis, although this is uncertain.</p><p>Fourth, and finally, the occurrences of accidental pandemics are independent. We might expect that, if a future pandemic were confirmed to have leaked from a lab, actions would be taken to reduce the probability of another. While this would not affect the probability of at least one accidental pandemic, it should reduce the probability of two or more, and hence the expected number too.</p><p>These factors imply that the model here is overestimating the likely rate of future accidental pandemics. Therefore, even under this model, which likely overestimates their frequency, accidental pandemics are almost certainly not yet the majority of pandemics we&#8217;d expect to see. This may change towards the end of the projection period; hence, considerations of improved biosafety are still important.</p><p>In order to fully assess the relative impact of accidental pandemics, compared with other sources, it is also important to consider their severity. In general, I would expect the types of pathogens that research is being conducted on to be similar to those we expect to cause pandemics. However, there may be a bias towards more severe pathogens, since these are the ones which we would most want to prevent or mitigate.</p><p><em>Thank you</em> <em>for</em> <em>reading to the end. 
<strong>I am currently looking for a job!</strong> If you think your organisation could benefit from this type of thinking, <a href="mailto:joshbblake@gmail.com">please get in touch</a>.</em></p><p><em>Many thanks to Sandy Hickson and Hena McGhee for commenting on drafts of this post, and the Cambridge Biosecurity Hub for many discussions informing my thinking. I am also grateful to the researchers who released the data I used: David Manheim, Gregory Lewis, the Global Biolabs project, and Web of Science.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.joshuablake.co.uk/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Deconfusion device! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Assuming exponential growth, which appears to be a good fit, growth at the same rate implies a constant multiplier between the different indicators. The <a href="https://blog.joshuablake.co.uk/p/gamma-poisson">gamma-Poisson model</a> employed for the prediction makes the same predictions if the amount of risk being incurred is scaled by a constant multiplier. 
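This invariance can be illustrated numerically. The sketch below is my own illustration, not the linked notebook: the Gamma prior shape a and the exposure numbers are assumptions, with the prior rate taken to zero so that only relative risk matters.

```python
from math import exp, lgamma, log

def neg_binom_pmf(n, r, p):
    # Negative binomial PMF, computed in log-space for numerical stability.
    # This is the posterior predictive of a gamma-Poisson model.
    return exp(lgamma(r + n) - lgamma(r) - lgamma(n + 1) + r * log(p) + n * log(1 - p))

def predictive(k, past_exposure, future_exposure, a=0.5):
    """Predictive distribution for the future count of accidental pandemics.

    Prior: rate per unit of risk ~ Gamma(a, b) with b -> 0 (scale-invariant),
    so the posterior is Gamma(a + k, past_exposure) and the predictive is
    negative binomial with shape a + k and
    p = past_exposure / (past_exposure + future_exposure).
    """
    r = a + k
    p = past_exposure / (past_exposure + future_exposure)
    return [neg_binom_pmf(n, r, p) for n in range(6)]

# Two accidental pandemics over some past exposure; project a decade ahead.
base = predictive(k=2, past_exposure=10.0, future_exposure=4.0)
# Multiplying all exposures by a constant leaves the predictions unchanged.
scaled = predictive(k=2, past_exposure=70.0, future_exposure=28.0)
```

Because the predictive depends on exposure only through the ratio of past to future risk, a constant multiplier on all the indicators cancels exactly.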
<a href="https://joshuablake.co.uk/lab-leak-base-rate/lab-leak-base-rates.html">Mathematical details are available in this R Markdown notebook.</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Using the dataset of <a href="https://www.pnas.org/doi/10.1073/pnas.2105482118">Marani et al. (2021)</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>BSL-4 labs are only a small subset of all research, and their numbers are small. The lab accidents dataset (from Manheim and Lewis) is likely incomplete, and possibly biased. Virology papers do not necessarily track risky research.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Cost-effective pandemic preparedness]]></title><description><![CDATA[In this post, I outline, at a high level, my thoughts on pandemic preparedness. I cover threats regardless of origin (zoonotic, diseases spread from animals into humans; accidental, those caused by humans without intent; or deliberate, as an act of warfare or terrorism). I hope in future to go into further details about my reasoning, including in a quantitative way. 
For now, I present them with some intuitive reasoning in footnotes.]]></description><link>https://blog.joshuablake.co.uk/p/cost-effective-pandemic-preparedness</link><guid isPermaLink="false">https://blog.joshuablake.co.uk/p/cost-effective-pandemic-preparedness</guid><dc:creator><![CDATA[Joshua Blake]]></dc:creator><pubDate>Thu, 21 Dec 2023 16:55:55 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/1e01ea14-60d4-4cee-b1f8-8629fa3b961d_420x459.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this post, I outline, at a high level, my thoughts on pandemic preparedness. I cover threats regardless of origin (zoonotic, diseases spread from animals into humans; accidental, those caused by humans without intent; or deliberate, as an act of warfare or terrorism). I hope in future to go into further details about my reasoning, including in a quantitative way. For now, I present them with some intuitive reasoning in footnotes.</p><h1>Summary</h1><ul><li><p><em>Direct<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></em> spending on pandemic preparation might be comparable in cost-effectiveness to the best global health interventions (e.g.: malaria bed nets). This is based on the historic number of deaths from pandemics, but should be a good guide to future impact.</p></li><li><p>The most likely source of significantly higher future threats is non-state malicious actors; however, this is highly uncertain. Promising mitigations may be low-cost and sensible, such as mandatory DNA synthesis screening.</p></li><li><p><em>Indirect </em>philanthropic spending, for instance increasing governmental or private spending on pandemics, is promising. 
This is especially true for interventions that also decrease the losses due to seasonal epidemics or noninfectious health risks (e.g.: indoor pollution).</p></li><li><p>There are few, if any, scenarios that lead to societal collapse. The only scenario I have seen that seems plausible is a so-called &#8220;stealth&#8221; pathogen (a pathogen is a virus, bacterium, or other microorganism that causes disease). I am extremely uncertain about the plausibility of such a scenario. The proposed response plans are inadequate to mitigate this risk, making tractability low.</p></li><li><p>A major weakness in our pandemic response is inadequate tools. This problem is worsened by early uncertainty in a pandemic, which makes it difficult to calibrate responses accurately. Currently, we have: imprecise and costly tools (e.g.: reducing interactions across society); tools that are hard to implement effectively (e.g.: masks or contact tracing); and tools that scale poorly (e.g.: border controls).</p></li></ul><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.joshuablake.co.uk/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.joshuablake.co.uk/subscribe?"><span>Subscribe now</span></a></p><h1>Introduction</h1><p>Pandemics have historically occurred every 4 years on average, killing, in expectation, 7 in 10,000 of the global population, or about 5.6 million deaths each time.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> This gives an expected death toll of 1.6 million per year.</p><p>How much should philanthropists be willing to spend to prevent this? <a href="https://www.givewell.org/charities/top-charities">GiveWell estimates global health interventions (e.g.: bednets to prevent malaria) can save a life for around $5,000 on average.</a> Therefore, to be equally cost-effective, preventing all pandemic deaths needs to cost $8 billion per year or less. Alternatively, a marginal benefit reducing deaths by 10% needs to cost no more than $800 million per year. The best opportunities, such as <a href="https://ippsecretariat.org/">the 100 Days Mission</a>, likely cross this bar.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> Many opportunities are unlikely to pass it, though, such as ongoing surveillance. Quantitative cost-effectiveness analyses are rarely available in this space, but should be encouraged to find other opportunities.</p><p>However, there are various more effective opportunities available to philanthropists than spending directly on preparedness.</p><ol><li><p>Leveraging government or private sector spending. Rich-world governments will spend $1 million or more to save a life. Spending on pandemic preparedness is much more likely to be cost-effective at this level. 
If a case can be made for private companies to invest (e.g.: because the interventions will reduce workplace absences), then the counterfactual is probably even better.</p></li><li><p>Deploying interventions that provide ongoing benefits,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> even in the absence of a pandemic. Some exciting ideas in this category are <a href="http://sequencing-roadmap.org/">clinical metagenomic sequencing</a> or <a href="https://progress.institute/indoor-air-quality/">improving indoor air quality</a>. There seem to be interesting trials in both areas, at least in the UK, with new standards for buildings and <a href="https://www.gov.uk/guidance/siren-study">programmes trialling more widespread metagenomic sequencing</a>.</p></li><li><p>One-off investments with low ongoing costs. For example, a new intervention (e.g.: a contact tracing app) might require a one-off spend to develop and little maintenance, yet be quickly deployable at the first signs of an outbreak. Here, the one-off spend can accrue long-term benefits. Yet, we must be careful that these investments are not outpaced by technological or societal changes. Many systems developed in the wake of the 2009 H1N1 pandemic were irrelevant only a decade later for COVID-19, when video calling was much more available and mRNA vaccines were rapidly developed.</p></li><li><p>Increases in risk. Many have argued that, for various reasons, the risks of a pandemic are increasing over time, although the evidence base is weak. For zoonotic risks, there is increased contact with animals due to habitat loss and more factory farming. For accidental risks, there are more labs doing risky research. For deliberate risks, access to the knowledge required to weaponize infectious diseases may become more widespread. Deliberate risks are the most uncertain. 
Cost-effectiveness is hard to assess, but there are low-cost interventions (e.g.: DNA synthesis screening) that seem worthwhile.</p></li><li><p>Second-order effects. Some have argued that pandemics could cause long-term effects on humanity, causing societal collapse or extinction. The view that such risks should guide our actions is known as longtermism; <a href="https://joshuablake.co.uk/blog/longtermism-doubts/">I am somewhat sceptical of this view</a>. However, even putting aside my scepticism of longtermism generally, the concerns here are currently very speculative.</p></li></ol><p>In short, rich-world governments should spend more on pandemics, and we should look to convince private companies that reducing biological threats is in their best interest. I am very interested in ideas for one-off investments that could improve preparedness, but think the case for these is easy to overstate. Low-cost ways to mitigate increasing risks are also worth pursuing. More research into the likelihood of long-term effects of pandemics, and cost-effectiveness analyses of reducing this risk, would help analyse this space.</p><p>The next two sections give my reasoning behind changing risks and long-term threats, probably the most controversial of my views. The final section changes tack to consider the weaknesses in our response tools.</p><h1>Changing risks</h1><p>Humanity experienced, on average, 0.3 pandemics per year across the 17th and 18th centuries, 0.5 per year in the 100 years up until the end of the Second World War, but only 0.1 per year in the 79 years since.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> Any argument that the rate of pandemics is increasing must answer why, empirically, this risk has recently been below the historic rate.</p><p>Alternatively, one could argue that risk is increasing because the severity of each pandemic is increasing. 
I am not aware of any evidence for this. My prior is that we should be able to mitigate pandemics more effectively in the future. COVID-19 has taught us a lot, such as the effectiveness of lockdowns and how to deploy vaccines more rapidly than ever before. There are further reasons to be more optimistic. To name a few: the 100 Days Mission (supported by the G7) to go even faster on vaccines, therapeutics, and diagnostics; new tools, including those utilising machine learning, to speed drug discovery; and technologies that will make home-working even easier, making lockdowns less costly (e.g.: virtual reality or self-driving cars).</p><p>The argument that zoonotic pandemic risks are rising is weak. Much of the evidence for increasing animal-to-human disease transmission is confounded by better global diagnostics and only considers the post-war period.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> There are reasonable mechanisms for increasing risk, such as deforestation leading to more human/animal interaction. But I have yet to see anything that updates me away from my view that pandemics are rarer than they have been historically.</p><p>Labs handling the most dangerous pathogens (BSL-4 labs) are <a href="https://www.kcl.ac.uk/warstudies/assets/global-biolabs-report-2023.pdf#page=5">increasing in number</a>. Based on current trends, their risk will equal the historical risk from zoonotic diseases in the 2030s.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> Therefore, while these labs may change the picture, they do not change the conclusions on cost-effectiveness, which would require order-of-magnitude changes. 
While these pandemics might be at the more severe end of historic ones, the distribution of severity is probably not much greater.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a></p><p>The final class of risk, and perhaps the most uncertain, is deliberate attacks. State actors, which have run and continue to run bioweapons programmes, have the greatest potential. However, they are likely limited by the indiscriminate nature of human-to-human transmissible bioweapons. This seems unlikely to change, although machine learning models allowing these states to discover more dangerous pathogens are perhaps threatening.</p><p>The more changeable class is terrorists and other non-state groups. Historically, they have been unable to use bioweapons.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a> However, advancements in dual-use technologies could overcome this: for example, more widespread access to DNA synthesis or easier access to information (e.g.: increased use of open source publishing or large language models such as ChatGPT functioning as "search engines on steroids"). Further research, including engagement between the scientific and intelligence/counter-terrorism communities, would help to better assess these risks. There are plausibly some cheap and helpful interventions here, such as <a href="https://www.nti.org/about/programs-projects/project/preventing-the-misuse-of-dna-synthesis-technology/">DNA synthesis screening</a>.</p><h1>Long-term effects</h1><p><a href="https://www.gcsp.ch/publications/securing-civilisation-against-catastrophic-pandemics">Gopal et al. (2023)</a> argues that biological threats pose a threat to the long-term future of humanity by causing societal collapse. 
They propose two scenarios whereby this could arise: &#8220;wildfire&#8221; pandemics that are so frightening that enough essential workers stay home, and &#8220;stealth&#8221; pandemics that infect such a large fraction of the global population before detection that we cannot respond to them.</p><p>A wildfire pandemic scenario is a disease spreading quickly to such an extent that even lockdowns cannot prevent its spread. Imagine early COVID-19 but more lethal or spreading several times as fast. Eventually, enough essential workers become infected or refuse to work (fearing for their lives) that society breaks down. This seems incredibly unlikely to me. First, such a disease would be far out-of-distribution compared to anything we have seen previously, combining the worst elements of a variety of pathogens. For a disease to continue growing exponentially in a lockdown (which could be stronger than those in COVID-19) would make it one of the most infectious diseases we have ever seen.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a> Some pathogens (e.g.: influenza) that are very well adapted to humans have never managed this. Second, even if this does occur, it is unclear if it would lead to societal collapse. While it is hard to do much except speculate here, my intuition is that this is a very high bar. Humans, especially those fearing for their lives, are ingenious. We would likely find more efficient ways of operating society, needing fewer essential workers. Finally, the idea that essential workers would stay at home while society breaks down around them is implausible to me. I would welcome evidence to change my mind here, but that case has not been made.</p><p>Stealth pandemics do seem very scary. Their biological plausibility is highly uncertain, as is the ability to engineer these pathogens in the near or medium term. 
This is compounded because I think such a pathogen needs to be more severe than Gopal et al. suggest.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a> A stealth pandemic would need to combine the worst elements of several pathogens humans have faced, making it far out-of-distribution compared to anything we have seen. I am extremely uncertain here, have not seen arguments either way, and do not have any expertise.</p><p>There are plausibly some low-cost interventions to greatly increase our probability of detecting a stealth pandemic before it infects a significant fraction of the population. For example, metagenomic testing in easy-to-access or high-risk populations (e.g.: healthcare workers, blood or respiratory samples taken for other purposes, or travellers). If metagenomic sequencing became cheap and useful enough to justify for clinical reasons, these data would likely be enough. However, what to do following detection remains an unanswered question. Until this question is answered, tractability on this issue remains low.</p><p>I am <a href="https://ineffectivealtruismblog.com/category/exaggerating-the-risks/biorisk/">not the first to point out</a> that the arguments that biological risks have a reasonable chance of causing long-term harm to humans are weak. While these threats should be considered, I want to see quantitative cost-effectiveness analyses before we redirect significant resources on this rationale.</p><h1>Pandemic response</h1><p>A major constraint on pandemic response is that our response tools are blunt. This means that taking a precautionary approach and responding early is expensive. 
Finding ways to better calibrate our response should be a high priority.</p><p>Restrictions on social activity, such as closing venues, are the fastest way we know of to stop a pathogen<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a> spreading, the most extreme version being a lockdown. These are expensive to implement from many perspectives, including economic and mental-health costs. They also deprive individuals of their liberties, which is morally questionable. Arguably, this is because of their indiscriminate nature: everyone must stop activity regardless of their personal level of risk. We should look to find lower-cost measures.</p><p>The most obvious granular measure is isolating only the individuals most likely to be infected. Contact tracing is the normal implementation of this, yet it performed poorly in COVID-19. Either the criteria for tracing were too broad (negating much of its use), or it had only marginal effects. The most promising paths for improvement here are rapid diagnostics or automated contact tracing. Both showed promise during the pandemic,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-13" href="#footnote-13" target="_self">13</a> and could be more impactful with better preparation.</p><p>Passive measures could also play an important part here. If we reduce the ability of a pathogen to spread, then we can mitigate pandemics without any restrictions on anyone&#8217;s lives. Promising avenues here are improving indoor air quality, either through better ventilation and filtration or through germicidal ultraviolet light.</p><p>Another avenue to pursue is to improve our ability to calibrate responses early in a pandemic. 
Large data and model uncertainty<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-14" href="#footnote-14" target="_self">14</a> mean that honest estimates of our uncertainty are extremely wide in an outbreak&#8217;s early phase. Yet, the quicker we can characterise the likely severity of an outbreak, the more quickly we can respond appropriately. Numerous academic groups are exploring ways to enhance our response. Incremental progress across areas is likely our best hope.</p><p>Combined, the above suggestions will massively improve our response to outbreaks before they become pandemics. Better knowledge will inform a response, which itself can be stepped up or down in a more granular way.</p><p><em>Thank you</em> <em>for</em> <em>reading to the end. <strong>I am currently looking for a job!</strong> If you think your organisation could benefit from this type of thinking, please <a href="mailto:joshbblake@gmail.com">get in touch</a>.</em></p><p><em>These thoughts are all my own, informed by discussions with a wide variety of people. I am particularly grateful to the Biosecurity Working Group based at the <a href="https://www.meridian-office.org/">Meridian Office</a> in Cambridge for both these discussions and comments on drafts. Lin Bowker-Lonnecker and James Lester both provided helpful and thought-provoking feedback. My views have been heavily informed by my research and experience providing scientific advice to the government about the epidemiology of COVID-19.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>By direct I mean paying for defences, as opposed to lobbying or other efforts that can generate leverage.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>This is based on the dataset from <a href="https://www.pnas.org/doi/abs/10.1073/pnas.2105482118">Marani et al. (2021)</a>. My preliminary reanalysis of their data suggests pandemics (killing at least 1 in 100,000 of the global population) occur with this frequency and severity. <a href="https://www.cgdev.org/publication/estimated-future-mortality-pathogens-epidemic-and-pandemic-potential">A recent modelling effort published by the Centre for Global Development</a> (<a href="https://www.cgdev.org/blog/how-big-risk-epidemics-really">blogpost summary</a>) implies higher numbers by a factor of around 2. 
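As a quick sanity check, the headline figures in the main text follow from these frequency and severity estimates. This is my own sketch: the world population of 8 billion is an assumption, and the small gap between the derived and quoted per-year figures presumably reflects rounding of the inputs.

```python
# Inputs quoted in the post (the 8 billion world population is my assumption).
world_population = 8_000_000_000
deaths_per_pandemic_share = 7 / 10_000   # expected deaths as a fraction of population
years_between_pandemics = 4              # historic average interval
givewell_cost_per_life = 5_000           # dollars per life saved (GiveWell estimate)

# ~5.6 million expected deaths per pandemic, ~1.4 million per year; the post
# quotes ~1.6 million per year, the gap presumably reflecting rounded inputs.
deaths_per_pandemic = deaths_per_pandemic_share * world_population
annual_deaths = deaths_per_pandemic / years_between_pandemics

# Spending bar for GiveWell-level cost-effectiveness, using the post's figure.
quoted_annual_deaths = 1_600_000
full_prevention_bar = quoted_annual_deaths * givewell_cost_per_life   # $8 billion/year
marginal_bar = 0.1 * full_prevention_bar                              # $800 million/year
```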
Unfortunately, the methodology in that paper is not detailed enough to reconcile the differences easily.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>For example, CEPI, with <a href="https://100days.cepi.net/wp-content/uploads/2023/10/2023_10_11-CEPI-Investors-Overview.pdf">a budget of $300m per year</a>, aims to provide a vaccine within 100 days of an outbreak. It seems likely this would reduce pandemic deaths by more than 4%, passing this bar.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p><a href="https://ourworldindata.org/causes-of-death">Deaths from seasonal and endemic respiratory illnesses</a> are comparable to the pandemic deaths I give here. <a href="https://hearthisidea.com/episodes/bruns/#health-impacts-of-particulate-matter-vs-pathogens">Indoor pollutants cause similar harm to indoor pathogen spread</a>, and filtration/ventilation can reduce both harms.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>As footnote 2.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>The most prominent papers here are <a href="https://www.nature.com/articles/nature06536">Jones et al. (2008)</a> and <a href="https://www.nature.com/articles/nature06536">Allen et al. (2017)</a>. <a href="https://gh.bmj.com/content/8/11/e012026#T1">Meadows et al. 
(2023)</a> appears more convincing, but their results (figure 2) seem somewhat overfit.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>The number of <a href="https://www.kcl.ac.uk/warstudies/assets/global-biolabs-report-2023.pdf#page=5">BSL-4 labs being built</a>, <a href="https://f1000research.com/articles/10-752">reported lab accidents</a>, and virology papers published are all growing at roughly the same rate. There have been one or two pandemics caused accidentally by humans (the 1977 Russian flu and possibly COVID-19). Taking this growth rate in a <a href="https://joshuablake.co.uk/blog/gamma-poisson/">gamma-Poisson model</a> gives the conclusion that this risk surpasses the historical pandemic risk between 2032 and 2042, depending on assumptions.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>Most labs working with viruses are working with ones similar to those we see in nature. For example, one of the leading hypotheses for the 1977 Russian flu pandemic is that a lab was trying to develop a vaccine against the strain and it went wrong. Furthermore, any viruses in labs (by definition) are under active study. This means we should be better prepared for them. However, labs are likely to focus on the more concerning pathogens (because these are of greatest public health interest), and sometimes even make pathogens more dangerous. 
Such labs should come under greater scrutiny.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>Possibly the closest to succeeding was the doomsday cult <a href="https://en.wikipedia.org/wiki/Aum_Shinrikyo#Tokyo_subway_sarin_attack_and_related_incidents">Aum Shinrikyo</a>, which attempted to deploy anthrax; however, they <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3322761/">made several technical mistakes</a>, causing their attempt to fail. Other attempted terrorists (e.g.: the <a href="https://en.wikipedia.org/wiki/2001_anthrax_attacks">Anthrax letters</a>) did not want to cause societal collapse or human extinction. <a href="https://www.tandfonline.com/doi/full/10.1080/1057610X.2022.2034852">No terrorist organisation is known to have attempted or been interested in an attack using an infectious disease.</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>The first UK lockdown reduced R0 by roughly 80% <a href="https://www.sciencedirect.com/science/article/pii/S1755436522000482">(Eales et al., 2022)</a>. If the situation required it to prevent society starving, I think this could be much more effective (e.g.: reducing remaining contacts or widespread effective masking among essential workers), cutting the riskiness of the remaining contacts by 2-10x. This gives the potential for an R0 reduction of 90-98%. This controls anything with a pre-intervention R0 of less than 10-50.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>Gopal et al. 
claim, similar to the wildfire scenario, that a majority of essential workers need to be debilitated or killed to cause societal collapse; it is unclear what sources support this claim. However, <a href="https://theprecipice.com/">Ord (2020)</a> argues that at least 50% of humans in every region need to be killed, and that plausibly as few as 98 survivors could restart civilisation. The stealth nature of the pandemic means that panic, or people staying home to protect themselves, is less likely.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p>Assuming a respiratory pathogen, the type most likely to cause a pandemic.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-13" href="#footnote-anchor-13" class="footnote-number" contenteditable="false" target="_self">13</a><div class="footnote-content"><p>Daily testing of contacts (or other individuals likely to have COVID-19) has shown promise in both <a href="https://www.gov.uk/government/publications/spi-m-o-statement-on-daily-contact-testing-3-march-2021">modelling</a> and <a href="https://www.sciencedirect.com/science/article/pii/S0140673621019085?via%3Dihub#bib5">randomised controlled trials</a>. Evaluations of contact tracing apps <a href="https://www.nature.com/articles/d41586-023-02130-6">show effectiveness</a>, but uptake remains a problem.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-14" href="#footnote-anchor-14" class="footnote-number" contenteditable="false" target="_self">14</a><div class="footnote-content"><p>Data uncertainty means that the data does not say what we think it says (e.g.: due to ascertainment or other selection biases). Model uncertainty means that the choice of epidemiological model to use is uncertain. 
Neither of these types of uncertainty is normally captured in traditional scientific measures of uncertainty (e.g.: confidence intervals).</p></div></div>]]></content:encoded></item><item><title><![CDATA[Topping the Metaculus beginner tournament]]></title><description><![CDATA[Metaculus, a community-based platform known for its prediction competitions, recently hosted a beginner forecasting tournament. If you&#8217;re new to this, forecasting tournaments involve making predictions on a wide range of topics, with the accuracy of those predictions determining the winners. My involvement was rather successful, simply by applying the basic principles of forecasting. In fact, I found myself, somewhat surprisingly, at the top of the leaderboard, despite my dwindling activity in the competition&#8217;s closing weeks. However, I did not officially win because I had a Metaculus account from before this year (previously,]]></description><link>https://blog.joshuablake.co.uk/p/metaculus-beginner-tournament</link><guid isPermaLink="false">https://blog.joshuablake.co.uk/p/metaculus-beginner-tournament</guid><dc:creator><![CDATA[Joshua Blake]]></dc:creator><pubDate>Sat, 29 Jul 2023 00:00:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d2d70651-faf6-4bcb-b733-264761c5bffd_241x142.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Metaculus, a community-based platform known for its prediction competitions, recently hosted a beginner forecasting tournament. If you&#8217;re new to this, forecasting tournaments involve making predictions on a wide range of topics, with the accuracy of those predictions determining the winners. My involvement was rather successful, simply by applying the basic principles of forecasting. In fact, I found myself, somewhat surprisingly, at the top of the leaderboard, despite my dwindling activity in the competition&#8217;s closing weeks. 
However, I did not officially win because I had a Metaculus account from before this year (previously, <a href="https://www.metaculus.com/questions/15087/metaculus-beginners-on-points-leaderboard/">Metaculus required only that entrants be level 5 or below</a>, which I was at the start of the competition). My basic strategy optimised performance for time spent: for the majority of questions, I spent 30 to 60 minutes setting a base-rate estimate and rarely revisited it.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!n6sm!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3f6138b-1146-473a-a944-e386fdaaadf6_1151x208.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!n6sm!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3f6138b-1146-473a-a944-e386fdaaadf6_1151x208.png 424w, https://substackcdn.com/image/fetch/$s_!n6sm!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3f6138b-1146-473a-a944-e386fdaaadf6_1151x208.png 848w, https://substackcdn.com/image/fetch/$s_!n6sm!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3f6138b-1146-473a-a944-e386fdaaadf6_1151x208.png 1272w, https://substackcdn.com/image/fetch/$s_!n6sm!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3f6138b-1146-473a-a944-e386fdaaadf6_1151x208.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!n6sm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3f6138b-1146-473a-a944-e386fdaaadf6_1151x208.png" width="1151" height="208" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b3f6138b-1146-473a-a944-e386fdaaadf6_1151x208.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:208,&quot;width&quot;:1151,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:32710,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!n6sm!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3f6138b-1146-473a-a944-e386fdaaadf6_1151x208.png 424w, https://substackcdn.com/image/fetch/$s_!n6sm!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3f6138b-1146-473a-a944-e386fdaaadf6_1151x208.png 848w, https://substackcdn.com/image/fetch/$s_!n6sm!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3f6138b-1146-473a-a944-e386fdaaadf6_1151x208.png 1272w, https://substackcdn.com/image/fetch/$s_!n6sm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3f6138b-1146-473a-a944-e386fdaaadf6_1151x208.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p>As for my performance, it varied. 
I excelled on two questions, underperformed on one, and was about average on the rest.</p><h2>My overall record</h2><p>My binary-question calibration, which includes every prediction I have made on Metaculus, shows that I was well calibrated. Metaculus reported me as 6% underconfident (I&#8217;m not quite sure what this metric means). Assessing calibration from the graph is challenging due to the narrow bin size. Nevertheless, the &#8220;well-calibrated&#8221; line falls within the 50% confidence intervals for nearly all bins, suggesting my calibration was not far off the mark.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9DqQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7036f32a-40ac-4c30-9c83-5cc07de43311_1108x501.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9DqQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7036f32a-40ac-4c30-9c83-5cc07de43311_1108x501.png 424w, https://substackcdn.com/image/fetch/$s_!9DqQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7036f32a-40ac-4c30-9c83-5cc07de43311_1108x501.png 848w, https://substackcdn.com/image/fetch/$s_!9DqQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7036f32a-40ac-4c30-9c83-5cc07de43311_1108x501.png 1272w, https://substackcdn.com/image/fetch/$s_!9DqQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7036f32a-40ac-4c30-9c83-5cc07de43311_1108x501.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!9DqQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7036f32a-40ac-4c30-9c83-5cc07de43311_1108x501.png" width="1108" height="501" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7036f32a-40ac-4c30-9c83-5cc07de43311_1108x501.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:501,&quot;width&quot;:1108,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:46541,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!9DqQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7036f32a-40ac-4c30-9c83-5cc07de43311_1108x501.png 424w, https://substackcdn.com/image/fetch/$s_!9DqQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7036f32a-40ac-4c30-9c83-5cc07de43311_1108x501.png 848w, https://substackcdn.com/image/fetch/$s_!9DqQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7036f32a-40ac-4c30-9c83-5cc07de43311_1108x501.png 1272w, https://substackcdn.com/image/fetch/$s_!9DqQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7036f32a-40ac-4c30-9c83-5cc07de43311_1108x501.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>My continuous predictions look underconfident: there are too few questions resolving in the tails of my distribution, which means that I&#8217;m placing too much probability mass there. The sample size is very small, though, so I probably shouldn&#8217;t read too much into it.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!vcCy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ad36c4b-2388-4079-9bfd-e63cd48136a7_1099x550.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!vcCy!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ad36c4b-2388-4079-9bfd-e63cd48136a7_1099x550.png 424w, https://substackcdn.com/image/fetch/$s_!vcCy!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ad36c4b-2388-4079-9bfd-e63cd48136a7_1099x550.png 848w, https://substackcdn.com/image/fetch/$s_!vcCy!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ad36c4b-2388-4079-9bfd-e63cd48136a7_1099x550.png 1272w, https://substackcdn.com/image/fetch/$s_!vcCy!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ad36c4b-2388-4079-9bfd-e63cd48136a7_1099x550.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!vcCy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ad36c4b-2388-4079-9bfd-e63cd48136a7_1099x550.png" width="1099" height="550" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6ad36c4b-2388-4079-9bfd-e63cd48136a7_1099x550.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:550,&quot;width&quot;:1099,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:38832,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!vcCy!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ad36c4b-2388-4079-9bfd-e63cd48136a7_1099x550.png 424w, https://substackcdn.com/image/fetch/$s_!vcCy!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ad36c4b-2388-4079-9bfd-e63cd48136a7_1099x550.png 848w, https://substackcdn.com/image/fetch/$s_!vcCy!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ad36c4b-2388-4079-9bfd-e63cd48136a7_1099x550.png 1272w, https://substackcdn.com/image/fetch/$s_!vcCy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ad36c4b-2388-4079-9bfd-e63cd48136a7_1099x550.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>Assessing the stand-out questions</h2><p>I did very well on two questions. On both, my edge was simply trusting the base rate and not over-updating on dodgy reporting or the question&#8217;s framing.</p><p>The first question was <a href="https://www.metaculus.com/questions/15854/how-many-2023-tech-layoffs-by-april-25th/">forecasting the number of lay-offs in the tech sector between 11th and 25th April.</a></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!LPHO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb136002-a6d3-4e9e-bfa7-0f5056321545_1120x290.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!LPHO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb136002-a6d3-4e9e-bfa7-0f5056321545_1120x290.png 424w, https://substackcdn.com/image/fetch/$s_!LPHO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb136002-a6d3-4e9e-bfa7-0f5056321545_1120x290.png 848w, https://substackcdn.com/image/fetch/$s_!LPHO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb136002-a6d3-4e9e-bfa7-0f5056321545_1120x290.png 1272w, https://substackcdn.com/image/fetch/$s_!LPHO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb136002-a6d3-4e9e-bfa7-0f5056321545_1120x290.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!LPHO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb136002-a6d3-4e9e-bfa7-0f5056321545_1120x290.png" width="1120" height="290" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cb136002-a6d3-4e9e-bfa7-0f5056321545_1120x290.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:290,&quot;width&quot;:1120,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:38627,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!LPHO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb136002-a6d3-4e9e-bfa7-0f5056321545_1120x290.png 424w, https://substackcdn.com/image/fetch/$s_!LPHO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb136002-a6d3-4e9e-bfa7-0f5056321545_1120x290.png 848w, https://substackcdn.com/image/fetch/$s_!LPHO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb136002-a6d3-4e9e-bfa7-0f5056321545_1120x290.png 1272w, https://substackcdn.com/image/fetch/$s_!LPHO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb136002-a6d3-4e9e-bfa7-0f5056321545_1120x290.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Two weeks of lay-offs, based on historical averages, would have led to a number very close to the top end of the predictable range, with a significant chance of exceeding it. However, these announcements are very clustered: the numbers are dominated by a few rare events, which makes the uncertainty high. I&#8217;m not really sure why so few forecasters paid attention to this, but it provided me with lots of points, so I&#8217;m not complaining!</p><p>The second question was <a href="https://www.metaculus.com/questions/16529/king-charles-iii-coronation-medals/">forecasting the number of coronation medals that would be awarded by King Charles III</a>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!v890!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4aa71733-7a20-4274-ac20-b9b1ceef34fb_1119x287.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!v890!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4aa71733-7a20-4274-ac20-b9b1ceef34fb_1119x287.png 424w, https://substackcdn.com/image/fetch/$s_!v890!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4aa71733-7a20-4274-ac20-b9b1ceef34fb_1119x287.png 848w, https://substackcdn.com/image/fetch/$s_!v890!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4aa71733-7a20-4274-ac20-b9b1ceef34fb_1119x287.png 1272w, https://substackcdn.com/image/fetch/$s_!v890!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4aa71733-7a20-4274-ac20-b9b1ceef34fb_1119x287.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!v890!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4aa71733-7a20-4274-ac20-b9b1ceef34fb_1119x287.png" width="1119" height="287" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4aa71733-7a20-4274-ac20-b9b1ceef34fb_1119x287.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:287,&quot;width&quot;:1119,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:35024,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!v890!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4aa71733-7a20-4274-ac20-b9b1ceef34fb_1119x287.png 424w, https://substackcdn.com/image/fetch/$s_!v890!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4aa71733-7a20-4274-ac20-b9b1ceef34fb_1119x287.png 848w, https://substackcdn.com/image/fetch/$s_!v890!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4aa71733-7a20-4274-ac20-b9b1ceef34fb_1119x287.png 1272w, https://substackcdn.com/image/fetch/$s_!v890!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4aa71733-7a20-4274-ac20-b9b1ceef34fb_1119x287.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>There were a bunch of newspaper reports that the numbers would be low, and that King Charles was trying to save money. My take was twofold: the number would be roughly in line with similar recent awards (e.g.: <a href="https://en.wikipedia.org/wiki/Queen_Elizabeth_II_Platinum_Jubilee_Medal">400,000 issued for Queen Elizabeth II&#8217;s platinum jubilee in 2022</a>) and very likely to cover all military personnel in the UK, if not the Commonwealth. I did, however, allow a 10% chance that the figure of 10,000 circulating in the press could be correct. 
The final number was 400,000, right on these estimates.</p><p>My worst question, by far, was <a href="https://www.metaculus.com/questions/17089/nyc-shelter-population-on-52223/">predicting the reported New York City homeless population on May 22nd 2023</a>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!iSKh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd82f4d6-1179-443d-bcff-aac9ee33d6bf_1117x280.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!iSKh!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd82f4d6-1179-443d-bcff-aac9ee33d6bf_1117x280.png 424w, https://substackcdn.com/image/fetch/$s_!iSKh!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd82f4d6-1179-443d-bcff-aac9ee33d6bf_1117x280.png 848w, https://substackcdn.com/image/fetch/$s_!iSKh!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd82f4d6-1179-443d-bcff-aac9ee33d6bf_1117x280.png 1272w, https://substackcdn.com/image/fetch/$s_!iSKh!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd82f4d6-1179-443d-bcff-aac9ee33d6bf_1117x280.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!iSKh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd82f4d6-1179-443d-bcff-aac9ee33d6bf_1117x280.png" width="1117" height="280" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bd82f4d6-1179-443d-bcff-aac9ee33d6bf_1117x280.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:280,&quot;width&quot;:1117,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:39474,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!iSKh!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd82f4d6-1179-443d-bcff-aac9ee33d6bf_1117x280.png 424w, https://substackcdn.com/image/fetch/$s_!iSKh!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd82f4d6-1179-443d-bcff-aac9ee33d6bf_1117x280.png 848w, https://substackcdn.com/image/fetch/$s_!iSKh!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd82f4d6-1179-443d-bcff-aac9ee33d6bf_1117x280.png 1272w, https://substackcdn.com/image/fetch/$s_!iSKh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd82f4d6-1179-443d-bcff-aac9ee33d6bf_1117x280.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"></svg></button></div></div></div></a></figure></div><p>Here, I think my issue was simply not updating. I forecasted about a week out, when some policy changes were going through the system. These changes&#8217; effects were probably clear a few days later. If I had updated my forecast, I could have reduced the uncertainty and performed much better.</p><h2>Conclusion</h2><p><em>This section courtesy of ChatGPT. The tone is&#8230; Interesting&#8230;</em></p><p>While this post doesn&#8217;t offer any revolutionary insights, it serves to highlight the importance of following well-established forecasting principles in a tournament setting, like Metaculus&#8217;s beginner tournament. The experience underscored the value of respecting base rates, regularly revising predictions, and acknowledging the fair criteria set by the organisers. This journey may not have ended with a formal victory, but the learning outcomes - embracing both our hits and misses as part of the forecasting process - make it a triumph in its own right.
Every forecasting challenge is an opportunity for growth, and this tournament has been no different.</p>]]></content:encoded></item><item><title><![CDATA[My doubts about longtermism]]></title><description><![CDATA[&#8220;Longtermism is the view that we should be doing much more to protect future generations.&#8221; The philosophy has gained great prominence within Effective Altruism (EA), and now appears to consume most of the discussion and energy within the movement, even if not most of the money.]]></description><link>https://blog.joshuablake.co.uk/p/longtermism-doubts</link><guid isPermaLink="false">https://blog.joshuablake.co.uk/p/longtermism-doubts</guid><dc:creator><![CDATA[Joshua Blake]]></dc:creator><pubDate>Sat, 03 Jun 2023 00:00:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/cbae15da-e371-4251-a322-300e4194a0d7_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a href="https://www.williammacaskill.com/longtermism">&#8220;Longtermism is the view that we should be doing much more to protect future generations.&#8221;</a> The philosophy has gained great prominence within Effective Altruism (EA),<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> and now appears to consume most of the discussion and energy within the movement, even if not most of the money.</p><ul><li><p>Will MacAskill, arguably the biggest proponent of longtermism, <a href="https://twitter.com/willmacaskill/status/1520107730626785280?lang=en">summarises</a> the argument for it as:</p></li></ul><ol><li><p>Future people count.</p></li><li><p>There could be a lot of them.</p></li><li><p>We can make their lives go better.</p></li></ol><p>On the face of it, this is a convincing argument.</p><p>However, this post outlines my objections to it, summarised as:</p><ol><li><p>Future people count, but less than present people.</p></li><li><p>There might
not be that many future people.</p></li><li><p>We might not be able to help future people much.</p></li></ol><p>To this, I will add a fourth: there are trade-offs from this work.</p><h2>Disclaimer</h2><p>I am not a philosopher, and don&#8217;t know much about philosophy (I am trying to learn). This is my best thinking on longtermism, and why my credence is not high enough for it to be the major factor determining my career choices (as 80,000 Hours advocates).</p><p>I should probably highlight I&#8217;ve not read What We Owe the Future,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> but having talked to many people I don&#8217;t think there is much in there that would change my mind.</p><p>I would <strong>love</strong> to be told why my musings here are wrong; please <a href="https://twitter.com/JoshuaBlake_/">Tweet</a> or <a href="mailto:joshbblake@gmail.com">email</a> me!</p><h2>Future people count, but less than present people</h2><p>The statement &#8220;I have a stronger ethical obligation to my immediate family than to a stranger&#8221; is deeply intuitive to me and, I would suggest, to most people; full impartiality is an ethically controversial view. My ethical intuition points towards something like a network.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> The more connections to an individual, the stronger the ethical obligations.
In general, you&#8217;ll be reasonably connected to those alive today (<a href="https://en.wikipedia.org/wiki/Small-world_experiment#Current_research_on_the_small-world_problem">even those geographically far</a>), but as you span across time these connections will get weaker.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> Therefore, we should somewhat discount the effects on people far into the future.</p><p>On the other hand, exponential discounting seems pretty wrong. The classic example is that <a href="https://80000hours.org/podcast/episodes/why-the-long-run-future-matters-more-than-anything-else-and-what-we-should-do-about-it/">even a 1% rate of discounting leads to Tutankhamun considering his own welfare as more important than the total of all humans alive today</a>. This also seems implausible.</p><p>I think there&#8217;s some middle ground here: we should not count our contributions to future people&#8217;s wellbeing as equal to that of people alive today, but we shouldn&#8217;t discount as strongly as exponential discounting implies. Personally, something roughly logarithmic makes sense.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p><h2>There might not be that many people</h2><p>Most longtermists believe that we face a high probability of extinction over the next century. <a href="https://globalprioritiesinstitute.org/wp-content/uploads/David-Thorstad-Existential-risk-pessimism-.pdf">As David Thorstad has recently pointed out</a>, assumptions over the baseline rate of risk strongly influence how much longer you expect humanity to last given a certain reduction in extinction risk.
The basic conclusion is that, assuming you think extinction risk is currently high, standard longtermist estimates of future numbers of people are several orders of magnitude too high, and longtermist interventions <a href="https://ineffectivealtruismblog.com/2023/02/04/existential-risk-pessimism-and-the-time-of-perils-part-7-an-application/">no longer look obviously more cost-effective than simpler interventions such as bed nets</a>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> Be wary here, though: how risk changes is <a href="https://twitter.com/JoshuaBlake_/status/1638546433367212032">weird, non-linear, and non-intuitive</a>.</p><p>Furthermore, there&#8217;s often an implicit or explicit assumption that, if we do not go extinct, we will have vastly more humans across the stars and be vastly happier. When I look at the suffering in the world today, I do not find the claim that humans will be much happier in the future compelling. I ask what is bigger: the gap between the billions in absolute poverty today, and those of us in the rich world; or the gap between the current rich world and the happiest possible humans? The answer seems pretty uncertain to me, but my intuition is that the former gap is probably bigger.</p><h2>We might not be able to help future people much</h2><p>The idea that we can take actions that will predictably and significantly improve the lives of people centuries or millennia in the future is quite bold on the face of it. <a href="https://en.wikipedia.org/wiki/Unintended_consequences#Perverse_consequences_of_environmental_intervention">We struggle to predict the consequences of our actions even decades into the future.</a> Well-meaning global health interventions that seem convincing to funders <a href="https://en.wikipedia.org/wiki/Roundabout_PlayPump">can be a bad idea</a>.
Early EA thinking was heavily intertwined with GiveWell, which systematically seeks out high-quality evidence to combat this. For longtermism, rarely is any of this possible.</p><p>For these reasons, among others, longtermists tend to focus on <a href="https://futureoflife.org/existential-risk/existential-risk/">existential risks</a>: those that would cause human extinction, or affect humans so greatly that we could not return to our current standard of living.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> This seems plausible, and preventing human extinction is unambiguously good.</p><p>Yet, most well-studied risks do not seem to be likely enough for us to be that concerned about them (e.g. meteor strikes). Therefore, the focus turns to less well-understood risks, most prominently the risks from artificial general intelligence or yet-to-be-invented biological weapons. This process can be referred to as <a href="https://ineffectivealtruismblog.com/2023/02/18/academics-review-what-we-owe-the-future-part-3-rini-on-demandingness-cluelessness-and-inscrutability/">regression to the inscrutable</a>: the arguments become less empirical and more based on intuitions or unverifiable claims, and hence near-impossible to argue against.</p><h2>Remember the trade-offs</h2><p>For me, the crucial insight of EA is that we have limited resources (time and money) that we should try to use to do as much good as possible. This requires considering the relative value of different actions we can take. Longtermism commits us to using these resources to help far-off humans, necessarily at the cost of those alive today.</p><p>With the simple argument for longtermism, the value of the future is extraordinarily high (say 10<sup>11</sup> humans), which means that you can easily justify reducing the risk of extinction on cost-effectiveness grounds using standard metrics.
This is true even with a not-very-high credence in longtermism, a lot of uncertainty over the actions you take, and more. Yet, once you start discounting future humans and/or doubting the number of them, these considerations matter a lot.</p><h2>Conclusion</h2><p>Longtermism is superficially compelling to me. However, the longer I think about it, the more objections I find. None of these is insurmountable for the theory on its own; combined, however, they mean I am hesitant to use it as the primary ethical theory guiding my actions.</p><p>EA generally approves of <a href="https://slatestarcodex.com/2013/05/02/if-its-worth-doing-its-worth-doing-with-made-up-statistics/">reasoning with best guesses</a>; I agree (at least, you should consider them). That principle should be extended to longtermist interventions, and those advocating for large amounts of resources to go there should make their calculations clear.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Why exactly this is the case isn&#8217;t clear to me. It might be a founder effect, in that MacAskill was very involved in early EA and then went on to be majorly involved in longtermism (possibly because this all happened in the University of Oxford philosophy department). It might be that EA skews utilitarian, which somewhat naturally leads to longtermism. It might be that MacAskill and others involved in early EA were convinced by longtermist arguments. This would be interesting for someone to dig into.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>I&#8217;m bad at both starting and finishing books.
I tend to overthink starting them (because I read so few that it feels like a big decision), and don&#8217;t finish them. I&#8217;m not sure why, because I read a lot of shorter articles, Tweets, etc. My best guess is that it feels like a big commitment to pick up a book so I need a large block of time. Any suggestions on reading more books and less Twitter would be welcome.&nbsp;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Strictly, a weighted network such that some connections are more important than others. Then your obligation to someone is a monotonically decreasing function over the shortest distance to them.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>This argument is based on distance to an individual, not absolute time: I claim your ethical obligation to people in the past is also less than to those in the future. In particular, this is agent-relative: I&#8217;m not claiming that future or past people have less worth intrinsically. 
Rather, that our moral obligations to them are less.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>My initial thought is that discounting <em>t</em> years from now something like</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6Wk4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef9e94c7-7e91-44c1-b609-6bc945969e8d_208x40.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6Wk4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef9e94c7-7e91-44c1-b609-6bc945969e8d_208x40.png 424w, https://substackcdn.com/image/fetch/$s_!6Wk4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef9e94c7-7e91-44c1-b609-6bc945969e8d_208x40.png 848w, https://substackcdn.com/image/fetch/$s_!6Wk4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef9e94c7-7e91-44c1-b609-6bc945969e8d_208x40.png 1272w, https://substackcdn.com/image/fetch/$s_!6Wk4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef9e94c7-7e91-44c1-b609-6bc945969e8d_208x40.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6Wk4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef9e94c7-7e91-44c1-b609-6bc945969e8d_208x40.png" width="194" height="37.30769230769231" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ef9e94c7-7e91-44c1-b609-6bc945969e8d_208x40.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:40,&quot;width&quot;:208,&quot;resizeWidth&quot;:194,&quot;bytes&quot;:2623,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!6Wk4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef9e94c7-7e91-44c1-b609-6bc945969e8d_208x40.png 424w, https://substackcdn.com/image/fetch/$s_!6Wk4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef9e94c7-7e91-44c1-b609-6bc945969e8d_208x40.png 848w, https://substackcdn.com/image/fetch/$s_!6Wk4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef9e94c7-7e91-44c1-b609-6bc945969e8d_208x40.png 1272w, https://substackcdn.com/image/fetch/$s_!6Wk4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef9e94c7-7e91-44c1-b609-6bc945969e8d_208x40.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>seems reasonable.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>It is possible to rescue longtermism with the Time of Perils hypothesis: that we live in a uniquely perilous time, and soon we will see 
a dramatic reduction in extinction risk. Yet this would be extremely surprising (we just happen to be alive at this time). <a href="https://forum.effectivealtruism.org/posts/N6hcw8CxK7D3FCD5v/existential-risk-pessimism-and-the-time-of-perils-4?commentId=AASmenGnzBhvWQKLy">Here&#8217;s some discussion on that.</a>&nbsp;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>What exactly is classed as an x-risk isn&#8217;t clear, nor consistent between authors. Extinction, enslavement of humans (to aliens, artificial intelligence, or some other beings), or permanent absence of civilisation are generally considered to qualify. Ord, in his book The Precipice, also includes anything large enough that the long-term potential of humanity is limited; this definition appears to depend heavily on the idea that humans would (by default) be far happier in the future and that removing this potential is of similar magnitude to extinction. I think this claim needs further justification.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Improve your forecasts of events: use the gamma-Poisson model]]></title><description><![CDATA[Forecasters often strive to predict the rate at which events occur. However, traditional models used by forecasters, such as Laplace&#8217;s rule (based on the beta-binomial model) have their limitations.
These models are sensitive to the choice of scale (e.g., whether time is measured in years or months) and do not accommodate multiple events occurring within a single time period.]]></description><link>https://blog.joshuablake.co.uk/p/gamma-poisson</link><guid isPermaLink="false">https://blog.joshuablake.co.uk/p/gamma-poisson</guid><dc:creator><![CDATA[Joshua Blake]]></dc:creator><pubDate>Sun, 23 Apr 2023 00:00:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/4d75593a-20fc-4a1c-a960-44a90ce40ba1_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Forecasters often strive to predict the rate at which events occur.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> However, traditional models used by forecasters, such as Laplace&#8217;s rule (based on the beta-binomial model) have their limitations. These models are sensitive to the choice of scale (e.g., whether time is measured in years or months) and do not accommodate multiple events occurring within a single time period.</p><p>In 2022, Jamie Sevilla and Ege Erdil published a <a href="https://epochai.org/blog/a-time-invariant-version-of-laplace-s-rule">blog post</a> that proposed a more robust alternative: the gamma-Poisson model. This model is time- or scale-invariant, meaning that the choice of scale does not affect the results. 
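</p><p>To see this concretely, here is a small Python sketch (the event counts are invented for illustration): Laplace&#8217;s rule gives different answers depending on whether we count in years or months, while a gamma prior with &#946; = 0 does not.</p>

```python
# Laplace's rule: P(event next period) = (s + 1) / (n + 2), where s of the
# n observed periods contained an event. Invented data: no events in 10 years
# (equivalently, no events in 120 months).
p_year_scale = (0 + 1) / (10 + 2)             # working in years
p_month = (0 + 1) / (120 + 2)                 # working in months
p_year_from_months = 1 - (1 - p_month) ** 12  # implied annual probability
print(p_year_scale, p_year_from_months)       # differ: the choice of scale matters

# Gamma-Poisson, prior Gamma(1/3, 0): P(no events in the next year) with x = 0
# is (T / (T + t)) ** (1 / 3), the same whichever units T is measured in.
p_none_years = (10 / (10 + 1)) ** (1 / 3)
p_none_months = (120 / (120 + 12)) ** (1 / 3)
print(p_none_years, p_none_months)            # identical: scale-invariant
```

<p>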
In this post, we will delve deeper into the gamma-Poisson model, exploring its assumptions and how to fully utilise the posterior predictions, and finally recommending an alternative to Sevilla and Erdil&#8217;s suggested prior.</p><p>There are three main recommendations in this post.</p><ol><li><p>Consider the assumptions behind your model, particularly that the rate of events is constant and that the times between events are independent.</p></li><li><p>Use the full posterior, including both the uncertainty in the event rate and the inherent randomness in when events occur, when making forecasts.</p></li><li><p>Consider the Gamma(1/3, 0) prior when no prior information is available (recommended by <a href="https://projecteuclid.org/journals/electronic-journal-of-statistics/volume-5/issue-none/Neutral-noninformative-and-informative-conjugate-beta-and-gamma-prior-distributions/10.1214/11-EJS648.full">Kerman (2011)</a> as a &#8220;neutral prior&#8221;), rather than the Gamma(1, 0) recommended by Sevilla and Erdil.</p></li></ol><h2>The gamma-Poisson model</h2><p>The gamma-Poisson model is probably the simplest possible model for events occurring in continuous time. The gamma refers to the distribution for the rate of events, and the Poisson to the distribution for how events occur conditional on the rate of events. It requires only three assumptions:</p><ol><li><p>Our prior belief for the rate of events can be represented as a gamma distribution.</p></li><li><p>The rate does not change over the period of time we are analysing.</p></li><li><p>The events follow a homogeneous Poisson process, meaning that events are independent and the chance of future events is not affected by past events.</p></li></ol><p>The gamma distribution has two parameters: the shape (&#945;, the Greek letter alpha) and the rate (&#946;, the Greek letter beta). We write this distribution as Gamma(&#945;, &#946;). 
Both parameters must be greater than zero to ensure a proper distribution.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> When used as a prior distribution, the parameters have intuitive interpretations: &#945; represents the number of observed events, and &#946; denotes the length of the observation period. As long as we are consistent in our analysis, the choice of units for &#946; is irrelevant, ensuring that our model is time- or scale-invariant.</p><p>One convenient property of the gamma-Poisson model is that the posterior distribution for the rate of events will also be a gamma distribution.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> This allows us to easily write down our posterior beliefs. If our prior distribution is Gamma(&#945;, &#946;) and we observe <em>x </em>events over <em>T</em> time periods, our posterior becomes Gamma(&#945;+<em>x</em>, &#946;+<em>T</em>). A useful forecasting quantity is the probability that there are no events in some future period of length <em>t</em>, which is:</p><p></p><div class="latex-rendered" data-attrs="{&quot;persistentExpression&quot;:&quot;\\left( \\frac{\\beta+T}{\\beta+T+t} \\right)^{\\alpha+x}&quot;,&quot;id&quot;:&quot;ASCTBAUKKJ&quot;}" data-component-name="LatexBlockToDOM"></div><h2>Using the posterior distribution for forecasting</h2><p>When forecasting from the gamma-Poisson model, there are two sources of uncertainty. First, we are unsure of the underlying rate of events, represented by our posterior gamma distribution. Second, the process has some inherent randomness over when the events occur; that is, even if we knew the underlying rate, we still would not know how many events will occur in any period.</p><p>Often, a point estimate is taken for the first of these, which understates our uncertainty.
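</p><p>As a quick sketch of the update rule and the no-events probability above (all numbers invented for illustration):</p>

```python
# Prior Gamma(1/3, 0); observe x = 4 events over T = 10 time units.
alpha0, beta0 = 1 / 3, 0.0
x, T = 4, 10.0

# Conjugate update: posterior for the rate is Gamma(alpha0 + x, beta0 + T).
alpha_post = alpha0 + x
beta_post = beta0 + T
rate_mean = alpha_post / beta_post  # posterior mean rate of events
print(rate_mean)

# Probability of no events in the next t time units (closed form from above).
t = 2.0
p_none = (beta_post / (beta_post + t)) ** alpha_post
print(p_none)
```

<p>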
This section will explain how to take both into account for several circumstances of interest.</p><h3>The rate of events</h3><p>The gamma-Poisson model provides us with a posterior distribution for the rate of events: Gamma(&#945;+<em>x</em>, &#946;+<em>T</em>).<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> The mean of this distribution, (&#945;+<em>x</em>) / (<em>&#946;</em>+<em>T)</em>, represents our updated belief about the average rate of events occurring in a given time period.</p><p>If we want to forecast the number of events in the next <em>t</em> time periods, we also need to take into account the natural stochasticity in the process. Consider that, even if we knew the rate of events exactly, we would still have some uncertainty over how many events will occur. To take into account both our uncertainty over the rate of events and this stochasticity, we should use our posterior predictive distribution. Under the gamma-Poisson model, this is NegativeBinomial(&#945;+<em>x</em>, (<em>&#946;</em>+<em>T</em>)<em> </em>/ (<em>t</em>+<em>&#946;</em>+<em>T</em>)).<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> The mean of this distribution, <em>t</em>(&#945;+<em>x</em>) / (&#946;+<em>T</em>), represents our expected number of events in the next <em>t</em> time periods.</p><h3>Time between events</h3><p>The mean time between events is one over the rate of events (e.g. if the rate is two per year, the mean time between events is half a year). Our posterior here is InverseGamma(&#945; + <em>x</em>, &#946; + <em>T</em>). The mean of this distribution is only defined if &#945; + <em>x </em>&gt; 1, which (if you follow my recommendations for setting a prior) occurs when you&#8217;ve observed at least one event.
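</p><p>A numerical sketch of this inverse-gamma posterior, using SciPy and invented numbers (x = 4 events over T = 10 time units, prior Gamma(1/3, 0)):</p>

```python
from scipy.stats import invgamma

# Posterior for the mean time between events: InverseGamma(alpha + x, beta + T).
alpha_post, beta_post = 1 / 3 + 4, 10.0
wait = invgamma(a=alpha_post, scale=beta_post)

print(wait.mean())             # equals (beta + T) / (alpha + x - 1) when alpha + x > 1
print(wait.ppf([0.05, 0.95]))  # 90% credible interval for the mean waiting time
```

<p>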
Intuitively, if we haven&#8217;t seen any events, then we have some belief that the event never occurs, and hence an infinite time between events. When the mean is defined, it is (&#946; + <em>T) / </em>(&#945; + <em>x </em>- 1).</p><p>Again, we get a posterior predictive distribution for the time until the next event. Here it is: Lomax(&#945; + <em>x</em>, &#946; + <em>T</em>), which has the same mean as the previous InverseGamma.</p><h3>Probability of no events</h3><p>Finally, we often want to know the probability that there are no events in a period of length <em>t</em>. This can be derived from either the negative binomial or Lomax distributions above, in either case giving:</p><div class="latex-rendered" data-attrs="{&quot;persistentExpression&quot;:&quot;\\left( \\frac{\\beta +T}{\\beta+T+t} \\right)^{\\alpha+x}&quot;,&quot;id&quot;:&quot;YXZHFVSSSL&quot;}" data-component-name="LatexBlockToDOM"></div><h2>Choosing the prior</h2><p>The choice of our prior, specifically the values of &#945; and &#946;, can be quite influential if we have not observed many events (certainly fewer than 5, although even up to around 10). When we have relevant information (e.g., a suitable reference class), we should choose these parameters to reflect that information. However, in cases where no applicable information is available, we may want a &#8220;reference&#8221; or &#8220;objective&#8221; prior that is broadly applicable.</p><h3>Acceptable choices</h3><p>Any suitable reference prior should have 0 &lt; &#945; &#8804; 1 and &#946; = 0 to satisfy the following three principles.</p><ol><li><p>Ensure that our posterior distribution is always proper, requiring &#945; &gt; 0 and &#946; &#8805; 0.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a></p></li><li><p>Avoid choices where changing scales changes the inference.
As soon as we choose &#946; &gt; 0, the choice of scale matters, which is exactly what we want to avoid. Therefore, we should choose &#946; = 0 and decide on &#945;.</p></li><li><p>If we have not observed any events, we should consider the single most probable outcome (the posterior mode) to be that the rate of events is 0, requiring &#945; &#8804; 1.</p></li></ol><h3>Specific choices</h3><p>Several recommendations have been made for choosing &#945;. Note that if <em>x</em> is larger than about 5 or 10, the recommendations will yield similar results, so the choice is not critical. If you have fewer events, I would recommend trying &#945; = 1 and &#945; = 1/3 to check how sensitive your results would be to this assumption for the specific quantities you care about.</p><p>Sevilla and Erdil recommended &#945; = 1 because it closely resembles Laplace&#8217;s rule and provides the best point estimate of the time between events (in expectation). However, it tends to overestimate the rate of events significantly.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> This prior will mean that the expected rate of events is always noticeably higher than the observed rate. Furthermore, there is quite a large posterior probability that the true rate is higher than the observed rate (see the figure below).</p><p>Kerman (2011) recommends &#945; = 1/3 because it implies that the true rate is equally likely to be greater or less than <em>x </em>/ <em>T</em> for all values of <em>x</em> and <em>T</em>, as long as <em>x</em> &#8805; 1 (at least one event observed). This is because the median of a gamma distribution with parameters <em>a</em> and <em>b</em> is well approximated by (<em>a</em> - 1/3)/<em>b</em>.
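</p><p>This claim is easy to check numerically with SciPy; the (x, T) pairs below are arbitrary illustrations:</p>

```python
from scipy.stats import gamma

# With prior Gamma(1/3, 0), the posterior for the rate is Gamma(1/3 + x, T),
# whose median should sit close to the observed rate x / T whenever x >= 1.
for x, T in [(1, 4.0), (5, 2.0), (20, 7.0)]:
    median = gamma(a=1 / 3 + x, scale=1 / T).median()
    print(x, T, round(median, 4), x / T)  # median is within a few percent of x / T
```

<p>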
Intuitively, this seems reasonable: if we have seen <em>x</em> events in <em>T</em> time periods, we should think it is just as likely that the mean rate is less than or greater than <em>x</em>/<em>T</em>.</p><p>Another popular choice is to make &#945; very small, say 10<sup>-6</sup>. This makes the prior pretty flat, and approximates the &#8220;scale-invariant&#8221; prior that Sevilla and Erdil want to use but do not, because it would create an improper posterior. Furthermore, it minimises the mean squared error in estimating the rate of events. However, this prior places far too much probability mass on extremely small rates of events before we observe one.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!LgbE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb00159e9-45df-43dc-b8a4-a36d6057d5e1_861x520.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!LgbE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb00159e9-45df-43dc-b8a4-a36d6057d5e1_861x520.png 424w, https://substackcdn.com/image/fetch/$s_!LgbE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb00159e9-45df-43dc-b8a4-a36d6057d5e1_861x520.png 848w, https://substackcdn.com/image/fetch/$s_!LgbE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb00159e9-45df-43dc-b8a4-a36d6057d5e1_861x520.png 1272w, https://substackcdn.com/image/fetch/$s_!LgbE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb00159e9-45df-43dc-b8a4-a36d6057d5e1_861x520.png 1456w"
sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!LgbE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb00159e9-45df-43dc-b8a4-a36d6057d5e1_861x520.png" width="861" height="520" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b00159e9-45df-43dc-b8a4-a36d6057d5e1_861x520.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:520,&quot;width&quot;:861,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:9733,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!LgbE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb00159e9-45df-43dc-b8a4-a36d6057d5e1_861x520.png 424w, https://substackcdn.com/image/fetch/$s_!LgbE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb00159e9-45df-43dc-b8a4-a36d6057d5e1_861x520.png 848w, https://substackcdn.com/image/fetch/$s_!LgbE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb00159e9-45df-43dc-b8a4-a36d6057d5e1_861x520.png 1272w, https://substackcdn.com/image/fetch/$s_!LgbE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb00159e9-45df-43dc-b8a4-a36d6057d5e1_861x520.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">The posterior probability that the true event rate is greater than the observed rate based on the number of events that have occurred, <em>x</em>, the value of <em>&#945;</em> in the posterior (for <em>x </em>&#8805; 1), and <em>&#946; </em>= 0. Note that the choice of <em>&#945; </em>= 1/3 gives approximately 50% probability for all values, which agrees with our intuition. Due to the scale-invariant nature of the gamma-Poisson model, these probabilities hold no matter what time period the events have occurred over. 
All lines shown will converge to 50%; however, if <em>&#946; </em>&gt; 0, they would instead converge to 0.</figcaption></figure></div><p>Overall: &#945; &#8776; 0 and &#945; = 1 are the best choices for estimating the rate of events and the time between events respectively. However, neither performs that well when we have not seen any events (especially &#945; &#8776; 0), and each will perform poorly when the quantity we care about is the one it estimates badly. Choosing &#945; = 1/3 provides a reasonable trade-off between the two, and has the additional desirable property that, whenever we have observed at least one event, we think that the rate of events we&#8217;ve observed (<em>x</em>/<em>T</em>) is as likely to be too high as too low.</p><h2>Conclusion</h2><p>Sevilla and Erdil correctly pointed out that using Laplace&#8217;s rule for a continuous observation (such as time) leads to inconsistencies. Here, we&#8217;ve laid out some details of the assumptions and use of this model. I&#8217;d strongly recommend making use of the full posterior distribution for your forecasting, and considering a Gamma(1/3, 0) prior.</p><p>Bonus: Kerman (2011) argues, for essentially the same reasons given here, that we should use a Beta(1/3, 1/3) rather than a Beta(1, 1) prior for probabilities.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>The rate of an event is the average (mean) number of events that occur per unit time. 
For example, the number of births per year or pandemics per decade.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>A proper distribution fulfils the requirements of a probability distribution: the probabilities are never negative, and the total probability across all outcomes is 1.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>This is because the gamma and Poisson distributions are <a href="https://en.m.wikipedia.org/wiki/Conjugate_prior">conjugate distributions</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Be wary that an alternative parameterisation of the gamma distribution is sometimes used, which has 1/(&#946;+T) as the second parameter. Check the documentation of any software package and verify that you get the correct mean.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Be wary that an alternative parameterisation of the negative binomial distribution is sometimes used, which has <em>t</em> / (<em>t </em>+ &#946; + <em>T</em>) as the second parameter. 
Check the documentation of any software package and verify that you get the correct mean.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>A proper posterior requires &#945; + x &gt; 0 and &#946; + <em>T</em> &gt; 0. As long as we have observations for a non-zero amount of time, then <em>T </em>&gt; 0 and hence &#946; = 0 is valid. However, we cannot guarantee that we observe an event (we might have <em>x </em>= 0), which requires the strict inequality &#945; &gt; 0.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>The difference here is intuitively confusing but can be explained by considering what happens when events are rare. In that case, the expectation of the time between events will be greatly affected by how much you believe that very large values of the time between events are possible. Furthermore, due to the lack of data, this belief is largely driven by your prior. Therefore, for an accurate mean (across all possible values of the time between events) you would prefer to underestimate the time between events, or equivalently to overestimate the rate of events. This problem only occurs when considering expectations.</p></div></div>]]></content:encoded></item></channel></rss>