There are several ways different approaches can win, as well as a very plausible scenario that nothing changes at all.
As a rule, the solution to a problem where everyone fails to get behind the same option is not to introduce a new one. But that’s no fun.
What I want to sketch out here is a synthesis approach that weaves the different goals of different approaches, rather than the specific solutions.
I call this the House of Review approach - an elected house that fits into current Parliamentary dynamics, and puts a strong focus on being a place of improvement and scrutiny. A lot of the mechanics are familiar, but with some new twists that help reconcile seemingly divergent goals in a coherent approach.
Read on for how it works, the thinking behind it, and then why the solution should appeal to the three key groups of democrats, politicians, and technocrats - and why their current approaches might end badly for them.
The key features here are:
To work through the steps: At the same time as the general election, there is a second ballot where you select a party (or no party at all) to populate the House of Review. Based on the result of this ballot, parties are allocated seats to fill.
Parties have two lists of people to fill their seats. There is a standard PR list of people who will be elected in order. Then there is a much wider group of ‘aligned specialists’. This group is not elected immediately, but a selection is appointed by the party for shorter terms (allowing flexibility based on the known agenda, which might be quite different depending on who actually wins the election). Their allocation is split 50/50 - if a party is due 50 seats, they elect 25 down the standard list, and have 25 to fill from the specialist list.
Additionally, there is a ‘no party/crossbench’ list of non-aligned experts and voices. Everyone who doesn’t vote is claimed by this list. As such, the size of the ‘crossbench’ goes up and down with turnout. The governance mechanics of the crossbench process are reviewed periodically by a citizens assembly.
So for a 400-seat chamber, following the results and turnout of the 2019 Commons election, 131 seats would be allocated to the crossbench, while the largest party (the Conservatives) would have 117, split between regular and specialist seats. Overall, partisan-aligned figures are the majority, but the division between the two kinds of appointment, plus the crossbench, creates a fluid arrangement of groups.
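The allocation mechanics above are simple enough to sketch in code. This is a minimal illustration, not part of the proposal: the PR formula (largest remainder with a Hare quota), the treatment of non-voters as a single ‘crossbench’ bloc, and the vote shares in the example are all assumptions for demonstration.

```python
def largest_remainder(counts: dict[str, float], seats: int) -> dict[str, int]:
    """Allocate `seats` proportionally to `counts` using largest remainders."""
    total = sum(counts.values())
    quotas = {k: seats * v / total for k, v in counts.items()}
    alloc = {k: int(q) for k, q in quotas.items()}
    leftover = seats - sum(alloc.values())
    # Any seats left over go to the largest fractional remainders.
    for k in sorted(quotas, key=lambda k: quotas[k] - alloc[k], reverse=True)[:leftover]:
        alloc[k] += 1
    return alloc

def house_of_review(party_votes: dict[str, float], non_voters: float,
                    seats: int = 400) -> dict[str, dict[str, int]]:
    """Non-voters (plus explicit 'no party' ballots) count towards the
    crossbench; each party's seats are split 50/50 between its standard
    PR list and its appointed specialist list."""
    alloc = largest_remainder({**party_votes, "crossbench": non_voters}, seats)
    chamber = {"crossbench": {"seats": alloc.pop("crossbench")}}
    for party, n in alloc.items():
        chamber[party] = {"standard_list": n // 2, "specialist_list": n - n // 2}
    return chamber
```

With invented shares of 40 and 27 for two parties and 33 for non-voters, a 400-seat chamber comes out as 132 crossbench seats and 160/108 party seats, each party’s total split evenly between its two lists.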
This is taking a lot of the functional elements of the current House of Lords and fitting them into a democratic framework.
There’s a strong idea that expertise is important, with a route for people to be Reviewers that is very unlike standing for election - while still ultimately constrained by electoral mandates. Shorter term specialist seats allow for temporary secondments, expanding the potential pool of knowledge, without having a huge chamber at any given time.
The difference between election and appointment is a spectrum rather than a hard divide. Rather than an X% elected and Y% appointed approach, this leaves it more up to parties how they want to manage their permanent and specialist members. Parties can even have a little cronyism, as a treat, but not for life, and parties have to make choices about who is important enough to keep.
The resulting chamber is weird enough it’s not competing for primacy with the Commons. The crossbench mechanism prevents it being a PR chamber that highlights the difference with the Commons. Taking non-participation seriously also provides a democratic reason to have a substantial non-party political element. But as it is also a choice of the ballot, this is compatible with any future move to mandatory voting, and as a party-of-last-resort might be popular in its own right.
This slightly more flexible membership is compatible with the Brown review’s idea of mayors, devolved leaders, having speaking rights, or being able to propose legislation through the chamber. This could act in part as a body of scrutiny for other layers of government where that was appropriate.
For the crossbench selection process, I would convene a citizens assembly to work out a democratic way to appoint people on behalf of people who don’t vote (or as a party-of-last-resort for people who want to choose it). This would effectively be writing the operating principles for the successor to the House of Lords Appointments Commission.
There are a few different mechanisms possible here, but personally I’d be looking for a small proportion of non-partisan “governance generalists” (continuity to preserve institutional knowledge about how to work effectively) and a wider specialist list that is brought in and out. A citizens assembly might decide to appoint a portion (or the entire allowance) of seats by sortition. I think they probably wouldn’t, but if they did, that’d be fine.
Ok, play it cool, but the stars are aligning. You have a new government, with some commitment and interest in an elected second chamber. The leadership hates PR, but that’s nothing new - and might work to our advantage here. Lots of people in the party are for it, are there dynamics where PR for the second chamber helps resolve internal arguments? There’s a path where this works out.
Against this we have the reason why this has been failing for literally a century - there’s a lot of dislike of where we are, but no real consensus on where to go. You need to lead here, but take a path that others can follow you on.
Let’s talk about the election nerd stuff. Maybe you love STV because it keeps constituencies and breaks open candidate selection from parties, but that works against acceptance here. Overlapping constituencies challenge the role of MPs in the first chamber, and we need something that gives parties some of the power of patronage they’re giving up, otherwise they’ll wreck it. Party list PR (even in this split form) gives you both of those things. From another angle, STV for the Lords makes it less likely you’ll win it for the Commons at some future point.
We also need to buy off the technocrats by showing how their love of expertise can be managed inside a democratic house (and that we can do much better on this than the current status quo). Post-election roster management is a bit unorthodox, but it meets both your requirements. Members are there because of the electoral mandate of the party, while the actual people are not the same identikit politicians you get in the other place - the parties are going to be pulling in the experts they need to make their case. This helps shield the approach from the (vaguely anti-democratic) complaint of making more politicians.
I’ve been assuming you’re the electoral reforming sort - but maybe you’re a deliberative democrat, wanting a House of Citizens selected by lottery. The mechanisms are in here to make your case. Using a citizens assembly to set the terms of the ‘cross bench’ selection is a foot in the door - and if they choose to use their seats for experts or citizens is up to them, and may change over time.
I’ll be honest here - there’s a chance this works out, but it’s all fallen apart so many times before. There’s a window of opportunity, but a lot of forces against it. Some of those forces cannot be appeased and have to be beaten, but others can be co-opted by finding a synthesis approach.
I know, I know, this doesn’t seem important. In the short term, you don’t want to have an opinion on this. You want to kick it into the long grass, do it next parliament, maybe create a commission or citizens assembly to look into it.
But eventually you do have an interest in what this looks like - because if that process comes back with something popular, and it builds enough support, you might actually have to do it or be stuck with what you currently have. Here is why you, in the long run, want a solution that looks a bit like this.
You want different things from the second chamber when you’re in government and not in government. The key thing is to get something that is constructive for you in both scenarios. You want something that is subordinate, but not completely toothless. You want something that opens up a slightly different arena for politics, that is constructive when in government or opposition.
Party politics is a team sport, but making policy involves people asking awkward questions. Effective scrutiny is part of getting results. Part of the value of the current House of Lords is as a release valve that helps manage climbdowns in a way that isn’t a partisan fight. Letting this sort of process play out elsewhere and selectively accepting the results has value in government and opposition. A very partisan second chamber doesn’t work for this in quite the same way.
The House of Review approach tries to maintain the fundamental parliamentary dynamic while introducing elections. The House of Commons is the primary house, and the second chamber through the specialist lists and cross-bench mechanism is not recreating the same kind of elected politicians, or a pure PR chamber that raises legitimacy questions about the Commons. It has legitimacy to do the things it needs to do, but is built in such a way that it’s not massively changing how the Commons works or relates to the other chamber.
As well as its primary function of bringing useful voices into the legislative process, the double-list system has some continued use in internal party management, allowing jobs for useful MPs who lose a seat. At the same time, you have limited seats to fill, but this is also to your advantage. Moving appointments to parties gives you a bit more latitude in selection (in government or opposition), while the limited seats also give you a clear reason to say no.
The goal here is to end up with a more effective version of what we already have - but shored up with the democratic legitimacy that makes it stable. By taking an approach that preserves much of current ways of operating, while taking in the key points of critics, the transition to a new second chamber would be minimally disruptive and add new strengths into the system whether your party is in government or opposition.
This is the longest one because I don’t think the current situation is going well for you, but unlike the democrats, I’m not sure you realise it.
The House of Review approach takes a lot of its general goals from your idea of an apolitical and appointed “House of Experts” (or Senate following the Canadian example), while trying to realise them through politicised democratic means.
Your approach to date is incremental change to convert the House of Lords into an appointed home of expertise. No big revolution, just an ongoing pressure and direction. This approach has had a lot of success, but there are big barriers to completing this transformation.
Your ideal solution of this House of Experts seems tantalisingly close, which leads you to embrace short-termist approaches to reform - but these undermine your long term interests. Creating a democratic chamber with technocratic purpose provides a long term and stable approach that gives you more of what you want.
A key value you see in a House of Experts is the ability to use expertise to check elected politicians. But when the current House of Lords is threatened, you turn around and tell those politicians what they want to hear. You point out that a wholly elected system isn’t that popular or important to people, and would be a lot of political capital they could be using on other things. Yes, there are problems, but wouldn’t it be easier just to make some small changes that deal with the worst of that, rather than throwing out the bits that actually work?
Because there is common ground in stopping big change, there is a misunderstanding that everyone agrees on what the House of Lords is for. Every few years you write your letters and reports saying “Please Mr. Government, stop doing cronyism and corruption, it’s ruining our nice technocratic chamber”, and… nothing happens. The problem is the cronyism is the point. It’s a key appeal of the current system that keeps it in place. The same inertia that protects your experts-in-robes also protects the worst people who have taken refuge in the Lords.
Your defence of what’s currently working is earnest and well meant, but is also providing cover to the parts you hate. No one has to come out and say “actually we’re opposed to an elected House of Lords because the current cronyism mostly works for us” because your much more palatable argument is making the case for them.
For political expediency, you tie your projects together. For the moment this is effective, but might mean one day you lose everything when an elected chamber sweeps this all away.
You might say I’m being unfair - and your incremental approach is a plausible way of getting the true technocratic chamber you want. If we get a stronger system of independent appointments, remove the bishops and the remaining hereditary peers, and put a cap on numbers, we have ended up pretty close to where we’d be if we’d set out to design it from scratch. None of these are big changes, and there’s clear public support for all of them.
But while your changes are smaller on paper, they run into the same problem as an elected house: rather than doing nothing, you are asking the government to spend time and money making its own life harder. The big problem a new government has is being under-powered in the Lords, coming at the end of a long spell of the other side making more appointments. They have the ability to fix this by doing lots of appointments - and the result is incredibly messy.
From the technocratic point of view, the fix is obvious. Create a more apolitical appointment process that strengthens the cross-bench, and this unlocks a lot of retirements, with peers knowing they are not directly benefiting the other side. Political appointments remain, but fewer are needed to improve the government’s hand - and it is less all-or-nothing because the central balance is increasingly held by the technocratic cross-bench.
Achieving this approach would be a big victory for the House of Experts model - but like everyone else’s reforms, it is asking a government to take time out of their schedule to do something to their disadvantage. Looking at the rest of their manifesto, politicians might ask: this seems a bit slow, do we have to do it now? Can’t we just leave it as it is, let it get a little bit bigger, while we deliver the policy agenda we were elected on?
You might make arguments about long term benefits, but these are the same arguments everyone else has been making and you’ve been undermining. While inertia works with you in seeing off big threats to your appointed technocratic model - completing this reform is more difficult than it seems.
The incremental approach has been racking up long term problems. You have completely changed what a ‘Lord’ means, and are wearing the ermine of your slain enemy. This was useful in sneaking in, but to the outside, your problem is now that people don’t see the difference. The House of Lords is so unpopular, and you now are the House of Lords. You can try and wreck bigger reforms and hope the small incremental approach continues to work for you - but there’s just a real risk that in your alliance with the status quo, you are too much part of the problem if those reforms start to gain pace.
So, taking this on board - if there’s energy towards big reform, how can you move that towards the outcome you want? Because your project to date has worked by small degrees, it hasn’t had to engage with popularity. There’s no pressure group out there making the earnest case to the public rather than whispering to politicians to slow down. But that doesn’t mean this is an impossible catch-up job. You know an appointed house is not that popular, but you also know another house full of politicians isn’t a slam dunk. If you get this bounced to a commission or an assembly - there’s a chance that, on reflection, the virtues of your approach will win through.
In your thinking you need to embrace democracy as the answer, not the cause, of your problems. Your attitude of “actually the most functional part of the British system is the least democratic aspect of it” is corrosive and hurts the whole system over time. Democracy can mean a lot of different things - it’s time to think about how you can work within that framework, rather than undermine it.
You are not going to get what you really want through the status quo - it’s time to think bigger. An elected House of Review, with strong technocratic vibes, can more strongly justify greater use of powers to delay legislation and win compromises. Finding a synthesis approach with the democrats gives you more of what you want in the long run.
The House of Review is one possible synthesis approach - taking the things you value, and articulating them in democratic language. It’s not the only such synthesis, but it’s the kind of approach you should be looking for.
As I said, the solution to a problem where everyone fails to get behind the same option is not to introduce a new one. It’s also a trap to get too attached to particularly fiddly or clever approaches. There is no shortage of clever ideas in the world, and change is more directly about creating and unlocking coalitions that can move things along. Clever ideas can be part of this, but only part.
So while I quite like the roster management and party-of-last-resort approaches - I think the important thing in the above is thinking through what different factions actually want. Rather than the solutions they bring up, what do they actually want to do? Are these actually incompatible? How can we build coalitions that advance a better solution? We might need clever ideas to stick the landing, but long before that there needs to be a willingness to reflect on what is important, and listen to the problems other people have - if not always their solutions.
Header image from ChatGPT
The key points:
Kieran Healy’s article on the problems of Nuance in sociology talks about three key problems with how it is used in that field:
While it’s all locked up a bit in academic language, these critiques generalise to a lot more writing - including mine! Here is my current thinking.
Nuance is not a virtue in itself, but only when it is in service of clarity and accuracy. Nuance isn’t bad, but appeals to the need for nuance in all circumstances are. There is no inherent value in complexity, and it is too easy to appeal to a generic need for nuance. It is harder to clearly explain what the specific problem with the lack of nuance is in a given case - but if you can do this, then you probably don’t need to talk about nuance at all.
There is a tension between clarity and accuracy to be navigated, with trade-offs in both directions. Simple explanations saw off details. But too much focus on edge cases inflates their relative importance compared to simple explanations. The really hard job in writing is walking the correct line between these two goals, in the service of the purpose of the writing (note: ideally figure out what this is).
It is always possible (and true) to say it’s more complicated than that - but this in itself adds little. What this is implying is you are a more subtle and sophisticated thinker than what you’re critiquing. Maybe you are, and that’s exactly what you want to imply! But there’s no value in being a subtle and sophisticated thinker if you can’t explain why this is important. And if you can, you can do it without the sideswipe (unless you want to of course).
Appeals to complexity can make a topic seem hard to approach, by suggesting only people who have fully absorbed the complete subject matter can have an opinion on a topic. This is probably the opposite of what you intend. People don’t need to know as much as you to have an opinion, they just need to read your great writing on the subject. In this way, appeals to complexity suggest a redraft is needed.
The first draft may be the story of the different layers of knowledge and reversals (telling the story of your journey), but the second draft should always be clear about what the simplest description of the most true thing is. The reader should be able to stop, understand what you meant to say, and with an understanding that is mostly true, at any point. Complexity should unfold from simplicity, rather than be presented in opposition to it.
In short, nuance: not good, not bad - just unimportant. It’s not the road in itself to good or impactful writing, or to helping people understand complicated ideas. Instead, focus on whether the writing is striking the right balance between accuracy and clarity.
Header image from ChatGPT
This is a sequel to a previous blog post, where I wasn’t convinced by the arguments being made about it being especially democratic for MPs to change the prime minister on their own (even if practically they can and have). This blog post goes a bit further into the debate about if MPs are the right or wrong people to pick party leaders in general.
Quick summary:
Bronwen Maddox writing in the FT argued that the practice of party members selecting leaders was becoming more obviously indefensible, and only had the appearance of being more democratic:
The motive in both parties for giving members a voice is clear — it seems more democratic. But there are never going to be enough of them to give a sense of real legitimacy. Because they are self-selecting activists or at least committed enough to politics to choose to pay for a party membership, they will never resemble the electorate overall.
Provided that the UK keeps a parliamentary system based on parties, it might be better to give MPs the decisive say. They are at least elected by the whole country. It would provide a more defensible process than the one now under way. Meanwhile we will have to watch for another six weeks, knowing that the candidates are playing on a national stage to a tiny gallery.
This gets at a lack of follow-through from people arguing against the membership having a say - having articulated that the membership fails to fix a problem (they do not resemble the electorate overall), the solution gives up rather than looking for approaches that could do this.
After all, if the membership are unfit because they’re a weirdly political group unrepresentative of the overall population, this raises real issues about letting MPs choose. Arguments about the demographic breakdown (or how well members work as a proxy for the electorate) are reasonable arguments unless your answer is in favour of a different group with the same problems. Either this is a problem or it isn’t. I think it is a problem that we should try and address rather than ignore.
This is tied to an argument that MPs are suitable in a different way - being democratically elected. This issue gets confused because it’s part of a broader question of how much leeway MPs actually have to make decisions on our behalf. When we’re talking about MPs’ decisions being inherently democratic, some people go “of course”, because they hold that MPs have been given fairly unlimited ability to make decisions on our behalf (representative model). But others think that what’s happened is far more limited - parties and MPs make pitches at elections and then are constrained by what they say they’ll do, and when that isn’t clear, they should come back for fresh instructions (delegate model).
We know from polling that there is a real elite/public split on this - with MPs (and candidates to be MPs) holding they have fairly wide power to use their own judgement, while the public being more likely to see their mandate as limited. In practice, what happens is a mix of these positions. The political system both gives MPs unrestricted practical power, but what they actually do is constrained by norms of behaviour informed by the delegate model (e.g. voters actually endorsed the party manifesto, even if your name was on the ballot, so the democratic thing to do is to follow party instructions to implement it rather than have your own opinions).
Because the idea that government MPs can ‘democratically’ change their leader (and policy direction) does not fit well with the experience of living through an election, when representative model believers are building supporting logic for this (to them, common sense) view of politics, things start to fall apart. In the case of the quote above, Conservative MPs are not elected by the whole country (only by the areas that elected them), and also do not represent all Conservative voters (because many live in areas without Conservative MPs). The idea that “parliamentary democracy” depends on the leader having the confidence of MPs gets stretched until people are arguing that a PM with the support of only half the party (and so one-quarter of MPs), is especially democratic.
In other cases, the argument is made that MPs are direct representatives of the voters, but this is sliding over the exact choice made in the election. The party voters did not have a choice of multiple candidates for the party, so it is difficult to say their voting choice is significantly affected by the candidate’s personal views on the future of the party. There is no mechanism where distribution of views in the party electorate will consistently be reflected in distribution of views among party MPs. People are making vote choices between parties, while voting for individuals - confusing how we talk about the democratic choices being made. We might broadly say that a constituency has authorised an MP to be a Conservative MP rather than a Labour MP, but have not given any clear view on what kind of Conservative to be. In practice, MPs have leeway here, but again, mixed with norms about how they should behave, which at present includes consultation with members and the local party.
(The only people who may have chosen an MP for their factional views are the party selectorate who chose them as a candidate. But this is probably one of the few groups strongly in favour of member decision making - in practice, they would reserve the right to make this decision themselves and not defer to their MP.)
I don’t find it convincing, on democratic grounds alone, to prefer the decision of party MPs to party members making the decision of the leader. If the problem is a change of direction, neither group is authorised to make this change on behalf of voters.
There are practical reasons why we might want MPs to be able to change prime ministers without an election (speed, maintaining standards in office, if someone is unable physically to continue in the job, etc), but we should also want MPs to have awareness of their democratic inadequacy to make certain decisions. In practice, this means parliamentary parties should collectively be willing to endorse the results of processes that go beyond MPs and include different voices. This doesn’t have to mean members alone, and we should be creative in thinking about ways of bridging the gap.
If there isn’t a democratic case to give the choice to MPs, what other arguments are on offer? Another approach is the good ol’ fashioned belief that MPs are just better people who make better decisions.
Writing for The Times, Jonathan Sumption (a former senior judge) thinks MPs should make the decision on the party leader. He raises some questions about the democratic credentials of party members, being concerned that Conservative members are older and wealthier than the average voter (which again, if a legitimate problem, obviously rules out MPs), but the substance of his argument is fleshing out a proper case for elitism. MPs should make the decision, because they will take the wider view, which members will not:
When choosing a new leader, MPs and party members have a very different outlook. MPs are there to represent the interests of their constituents and, in a broader sense, the public interest, whereas party members represent no one but themselves. MPs will look mainly to the impact of their choice on the electorate at large, because that will determine their chances of re-election. They know that this will involve a large measure of ideological compromise. By comparison, party members are rarely interested in ideological compromise and are inclined to look no further than their own political positions. They will choose someone who shares their prejudices, and kid themselves that the rest of the electorate will see the light.
This is a very charitable description of MPs that just doesn’t hold up against the evidence. Many MPs are, obviously, intensely political people who hold ideological views outside the mainstream, which affects how they view political decisions. They are more ideological than party members (only some of whom are political obsessives). This New Statesman article and the linked Mind the Values Gap report on where MPs, members and the public stand are both well worth reading.
Sumption tries to say that for MPs the public and private interest are aligned - and that’s only sometimes true. Not all MPs are equally exposed to the shifting moods of the electorate; some have a majority that (generally) gives them the comfort to make ideological bets. But there’s another element where there is just a straightforward conflict of interest. The decision about the party leader (which should be made with a view to the public interest) is mixed in with the private interest of individual MPs.
Because of the huge power of patronage of the party leader (and especially of the prime minister), there is a strong motivation for MPs to try and get on the winning team. For high profile MPs (who might be potential candidates themselves) promises of significant positions might be part of the deal making process to endorse. Even if the new leader isn’t especially vindictive, there may be people ahead of you in the queue who did back the winner. From the other perspective, there are potentially high profile rewards in backing a longer-shot candidate when prospects for advancement under the frontrunner are low.
There is a mix of private and public motivation in this, and deal-making between complementary wings of the party is part of making a compelling offer to the party as a whole. But the private motivations are there. Maybe the idea that there are MPs who backed Truss just for career reasons is an unfair slur by their opponents, but it can’t just be glossed over. Whether you take it at face value or not, Rory Stewart tells similar stories of MPs telling him they had to back Johnson rather than him in the 2019 leadership contest because of the potential impact on their advancement.
At the least, consideration of the private interests of MPs should stop us making sweeping statements about how only MPs represent the public interest alone. Members may have faults, but don’t have their career advancement riding on their leadership choice. Their distance from power has some utility.
The power problem is a bigger issue for the governing party, but there is a different problem for opposition parties: a “high tide” problem. By definition, their MPs are not representatives of the areas that are required to reach a majority, and the views of voters in these areas are not represented even indirectly. The optimal party leader is the median of the party-to-come, not the party-that-is. A leader very popular with existing MPs might be doubling down on failure. This is one reason why MPs might want systematic ways of including other voices in the direction of the party - there are missing people in the room.
This leaves us with the realist case for elitism. Fundamentally existing elites have power, and processes need to include existing elites to stop them undermining the result. This is the most convincing case for MPs having at least some decision power - they cannot be fired, and have to be worked with.
Sumption makes the point that “So far, no UK party leader chosen against the preferences of its MPs has ever gone on to win a general election” - and even accepting this from the small sample (and not bringing up the many leaders chosen by MPs who similarly failed at this task), there is a practical edge to this. MPs working with, or against, the leader obviously impact how effective they can be. Even a small number working against you can be bad!
But the implication of this point is that you need to win big, or you need to manage the large minority who might not be happy with you. Having 45% of your MPs significantly against you is still a big problem when fundamentally you need everyone to throw their weight around together to get things done in a Parliament that contains other parties.
The lesson of the realist approach is not that MPs decide once to make you leader and are bound by that, but that MPs decide all the time how well they work with you. It’s a mistake to think that a process guaranteeing a majority of MPs initially backed you solves the problems of a divided party. Strong endorsements from groups outside MPs don’t guarantee that non-backers will support the project, but they would be part of the argument that they should.
But what’s the alternative? We have to be more creative. Members are bad? Sure! MPs are bad! Oh no! How should parties make these decisions?
One way would be to take this concern that MPs and members are not representative of the voters seriously and assemble a group of voters to act in some part of the process. There’s some element of this in Labour’s £3 membership approach, but it’s not targeted enough. The goal isn’t mass participation (that’s what elections are for), but to make sure the kind of voices that people complain are missing from current processes are deliberately present.
More in depth than polling, this assembly could be used to really road test different directions for the party. There’s no need to follow the usual logic of an entirely representative group of people. Government parties could probably comfortably stick to their voters, but opposition voters may choose to start there and include other demographics they want to make more progress with (this would be an argument in itself, but a structurally useful one to have!).
This could be used as a replacement or addition to members’ votes in an electoral college, or as a sorting stage to narrow the field of different directions before a vote by MPs or members.
Another change could be to increase the constructive overlap of different parts of the process, rather than highlight divisions that may be artificial. The problem with the current process is not that the membership imposed a winner on MPs, but that the two groups pointed in different directions at all.
For instance, this assembly could use an approval voting system (where each candidate can be rated, and there’s a threshold to advance) rather than a head-to-head competition. This avoids the situation where two different parts of the process are pointing in different directions (some candidates may have greater support, but they’re all good enough) - while guaranteeing MPs are choosing from approaches that have real grounding in the groups they need to win an election.
(For the reverse version of this, if Conservative MPs had used approval voting to flush out which candidates could have majority support, it could have been much clearer if members were really imposing an unpopular choice or not, without there being a “winner” for the members to reject).
The important thing from my point of view is not to boringly retreat into arguments for elitism. If the argument against members implies the process should be wider - open it up! If the argument against MPs is that they have too great an opportunity to personally benefit from decisions, it’s really good to have other groups in the mix who have more distance!
Having a clear head about the merits and problems raised by the participation of different groups helps keep the eye on the prize - for parties, leaders who can lead them to popular support in winning elections. And, for the public as a whole, parties that are at least semi-grounded in the issues facing their voters, rather than issues that only animate the highly politically engaged.
Header image: DALL-E prompt
In substance this is a differently opinionated version of simonw’s python library template and approach to GitHub templating. There are more details on exactly the default features included on the repo page itself.
More fiddly are some small changes in how the templating process works, most of which are overkill for this library, but which solve some problems I’ve run into with more complicated templates.
This repo combines the double repo approach (a cookiecutter repo and a GitHub template repo that clones the cookiecutter repo) into one repo that can do both.
I’m using pytest and a GitHub Action to do automated tests on the health of the template itself. When the template is updated, a GitHub Action tries to clone the template, and then run the internal test suite of the instance of the template.
Having the two repos in the same place also gives new workarounds to the problem of copying GitHub Actions into the new instance. A workflow run is not allowed to create new workflow files or change existing ones, just delete them. This means that a new instance whose template contains GitHub Actions can’t automatically be committed back to GitHub by a workflow. One way of solving this is doubling up: both the cookiecutter folder and the outer template repo contain the GitHub Actions for the new instance, so when the cookiecutter is run, it does not modify or add any actions because they are already in place. This approach lets you have just one set of files: I store the actions outside the cookiecutter folder (so they are already in place when GitHub uses the template), and to keep this working with cookiecutter, a cookiecutter hook copies them into the new instance from the template directory.
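That hook step can be sketched in a few lines. This is an illustrative sketch rather than the repo’s actual hook: the paths and the `copy_workflows` name are my assumptions.

```python
# Sketch of a cookiecutter post-generation hook that copies workflow files
# stored outside the cookiecutter folder into the freshly generated instance.
# Paths and names here are illustrative assumptions, not the repo's real code.
import shutil
from pathlib import Path


def copy_workflows(template_root: Path, instance_root: Path) -> None:
    """Copy .github/workflows from the template repo into the new instance."""
    src = template_root / ".github" / "workflows"
    dest = instance_root / ".github" / "workflows"
    dest.mkdir(parents=True, exist_ok=True)
    for workflow in src.glob("*.yml"):
        shutil.copy2(workflow, dest / workflow.name)
```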
The GitHub bootstrapping method works through an action that only unfolds the template when the current repo name does not match the repo that is the source of the template. So when a repo is created through the template, there’s a new push event, which triggers a GitHub Action, which can see it is no longer in its home repo, and triggers the bootstrap. I’ve generalised this a bit - rather than exactly matching the origin, the bootstrap process will run in any repo whose name doesn’t end with `-auto-template`. This makes it a bit easier to fork and create new variations, without needing to change all the template_level actions every time.
Header image: Dall-e prompt (A robot looking confused working with scissors and paste)
For instance, in this twitter thread Robert Saunders makes the case that it’s undemocratic that party members pick the prime minister. This seems reasonable. But he also says that party MPs picking the prime minister would be more democratic. I find this very difficult to make sense of. He’s not alone in this (and I’ve seen various stray tweets), but it’s one of the most specific write-ups so I’m picking on it.
I think this pair of opinions comes from a dissonance between believing it is legitimate for a prime minister to change without an election, and the obvious weirdness of party members having their own election to choose a new prime minister. In the UK system nothing requires an election when the ruling party changes its leader, and this has happened fairly often. But this is strange, because the democratic elections we do have are obviously about choosing a prime minister.
Parliamentary Democracy accounts of UK politics say that people elect MPs, but do not elect the prime minister. But this doesn’t fit well with how parties present themselves, or in how voters make decisions. MPs themselves have weak personal votes, and (generally) rise and fall with the fortunes of the party. Voters themselves use evaluations of the leaders in decision making, and this shows up in correlations between feelings towards the party leader and voting for the party. If people are not “voting for Boris Johnson to be Prime Minister”, they are “voting for the Conservative-Party-with-Boris-Johnson in charge” to be in control of the country. This is also just really clear in how parties talk about themselves and their leaders. “We elect MPs not the PM” is technically true, but so obviously not a description of our democracy that it raises far more questions than it answers.
Given the election process is clearly about choosing a prime minister, it’s odd that the Prime Minister can also change through a completely different process. We had a big public fight everyone could get involved in about the choice of prime minister, and now there’s an “election-like thing” that only some people get to take part in? The attitude that people who find this odd are uneducated, rather than noticing an obvious misalignment, is very annoying.
The “MPs choosing is democratic” argument tries to reconcile an educated understanding of Parliamentary Democracy with also thinking this not-election is odd. The problem with this argument is the slide between saying that “it’s good and democratic that the Prime Minister is chosen by MPs” and “it’s good and democratic that the Prime Minister is chosen by MPs of one party”. A government, in practice, does not draw its democratic authority from the confidence of parliament as a whole. Its power is a result of parties/coalitions acting as block votes, amplifying internal factions and excluding the minority (opposition) view.[1]
What “Confidence of Parliament” means in practice is that the majority faction continues to value accepting the outcome of internal processes. The important thing is this collective agreement could be anything. If we’ve accepted this kind of block voting as a legitimate part of Parliamentary Democracy, there is no difference if the prime minister is picked by MPs, party members, a citizens assembly of party voters, choosing the tallest candidate, or a random lottery. Whatever rules the majority want to adopt internally leads to outcomes that “a majority of MPs support”. Arguments can be made on other principles (democratic or not), but if Confidence of Parliament is your criteria, there’s no reason to prefer one of these options over another.
All the supporting arguments made for MPs choosing being democratic make much more sense as arguments for elections. Here are a few:
Policy change
It’s undemocratic for policy change to result from pitches to an unrepresentative minority of the public. True! But this problem is obviously worse if the pitches are just to MPs. This is already obvious in the current leadership election, where pitches have been made explicitly on the ability to help colleagues get elected, or to give them a slush fund to make their constituency work easier.
If a change of policy direction is democratically significant (and there are good reasons to see it that way), this requires an election, regardless of how it came about.
On the other hand, this is also a good argument that a change of leader without a substantive change in direction does not raise democratic questions. A contest limited to MPs might be better able to answer the question “who is best placed to carry out the agenda we have already approved in an election”. But in practice, choosing a new PM generally means something has gone wrong and potential leaders understandably want to pitch their take on what to change. In which case, a new direction is good - and an election also good.
Accountability
One argument made is that MPs are accountable in a way that party members are not. But we hold MPs accountable with elections, and so if there is not an imminent election, there is no opportunity for accountability. If accountability is important, what you want is an election to validate the choice - and fast.
Practically, in an election, MPs who made a choice will not be judged by that choice, but by the outcome of the process. If an MP said Candidate A should be PM, but Candidate B is appointed by party MPs or the party members, they are still being judged by their new leader: Candidate B. There is no meaningful difference in how the public can hold the choice of MPs, or the choice of party members, to account. You need an election.
Demographics
Party members are not representative of the country. True! But neither are party MPs (in some cases they are even further off). Again, if this is your concern, hold an election.
The Parliament website has an explanation of general elections. The first sentence of the section on who chooses the prime minister says “The Prime Minister is appointed by the monarch”. This is true, but also unimportant. The prime minister is chosen by the public electing MPs that are aligned with them through the election. There are edge cases that are more fiddly, but this explanation is much more useful than the mostly theoretical times the monarch is important. Parliamentary Democracy explanations of what is happening end up emphasising completely unimportant points before explaining what is actually happening in a footnote.
There is a dissonance between the learned reality of British politics (we elect the prime minister) and the educated view of British politics (actually we don’t). This leads to arguments being made about democracy that don’t really work. If you have a democratic problem with party members choosing the prime minister - the only logical thing to think is that this choice needs to be confirmed by a general election. You might just think MPs would make a better decision - and that’s fine! But it’s not a democratic argument.
[1]: You could imagine a different situation where we do not accept block voting or even majority rule (some of the language in this area is because Parliament at points in its history operated more by consensus). In this situation, MPs could rank their preferences for PM and they are elected from the house in this way. The winner would still be from the majority faction, but would be one who is more towards the median position of the whole house - but may be well out of the party mainstream. “Parliamentary Democracy” explanations don’t distinguish between these two scenarios - despite the fact they would represent very different political systems. It’s useful as a factual description of “the rules”, but a very thin description of democracy in itself.
I’ve released a new package with an implementation of a pipe feature for Python: function-pipes. It can be used as a package, or, as it depends only on the standard library, it can just be copied as a file and dropped into a project.
There are a few different packages that already do this, but mine works better with type checkers and should generally have less overhead than other methods.
I have overthought what is quite a simple function, so I am writing up my overthinking. Read on for adventures in type checking and marginal Python efficiency gains.
R has a very nice syntax for passing the result of a function as the argument into another function.
This means instead of:
summary(head(iris))
You can do:
iris |> head() |> summary()
In the magrittr version of the pipe (`%>%`), the value being piped through can be moved to different positions using `.` as a stand-in for the value, while the base R version cannot move the value around.
This idea of a pipe is quite useful, and there have been a few different attempts to write packages that port it to Python.
I wasn’t quite happy with any of these (as apparently neither were the previous people who all wrote their own), and so tried to piece together an approach that I liked.
The rough criteria were:
Looking through existing packages, there were four general approaches.
One library (robinhilliard/pipes) handles the idea of a pipe by using a rarely used operator as a stand-in, and then using a decorator to rewrite the AST tree of the function so that it looks like Python expects.
@rewrite_pipes
def wrapper(value):
    return value >> func

result = wrapper(value)
This is one of the options that produces the most efficient final code, because as far as python is concerned, it is just syntactic sugar for the hard to read but basic approach.
The drawback is that it has to run inside a decorated function (and so can’t be used everywhere), and that linters and type-checkers don’t understand it.
This is the approach in the pipe21 library, and it looks quite nice! It also has the advantage where the function itself is about 10 lines of code, and can easily be dropped in a module.
result = value | Pipe(func)
This approach works using the `__ror__` method. While binary operations are normally handled by passing the right-hand item into the operator method of the left-hand item, if there is no good handler there, Python looks for the reflected method on the right-hand item - in this case, the usually unused reflected or, `__ror__`. This means that the smart functions in the Pipe class can be introduced after the value, and at any point the chain can stop and return the correct value.
The pipe21 library has the advantage of being very compact to describe: it’s less than 10 lines that can be copied into a project.
There are problems with this approach. The `|` operator (especially in typed applications) is becoming more popular for its intended ‘or’ use, which does not have a direction in the way the pipe does, so using it this way is not consistent with other uses of the operator. This approach can be type-hinted, but it runs into a specific problem with lambdas I’ll explain further down.
The similar approach of using an Infix operator to create the function can be a bit more explicit about the direction by making the operation itself a class that sits between two values, but is then introducing quite a lot of abstraction and repetitive syntax.
result = value | pipe_to | func
This uses method chaining to add new functions in a way that isn’t dissimilar to how pandas addresses it. A container object takes in the value, and then successive functions are applied to it using a pipe method call.
result = Pipe(value).pipe(func).end()
The big problem with this approach is there is no way of the Pipe knowing when the last entry is, and so the value has to be accessed explicitly with an end function. This version can work OK with type checkers, including lambdas.
(note: I’ve lost the link to the blog post that discussed this version, will re-add if I find later.)
This approach is very simple. A pipe function takes the value and a list of functions. The function then returns the final results. The very simple version of this (four lines) can be found in the functoolz library.
result = pipe(value, func)
This doesn’t rely on non-python syntax, and is easy to follow. Although the way you have to do it is tricky, it can be written in a way that type-checkers can understand. I’m building on this approach.
A problem with pipes is what to do when a function needs extra arguments, or needs the current value somewhere other than the first position.
I had a working approach that looked like this, building on functools `partial` to have a placeholder value that could be filled in with the current value (as with the magrittr library). My approach was similar to the way the pipetools library uses its X object - having a stand-in object that can be replaced later.
pipe(value, pipe.func(function_that_expects_the_value_as_keyword, foo=pipe.value))
This solved the problem - but it requires people to understand a few new things, and it cannot be made to play well with type checkers. I eventually concluded that Python has an OK tool for exactly this situation - lambdas - which also have the advantage of already being understood by type checkers.
pipe(value, lambda x: function_that_expects_the_value_as_keyword(foo=x))
So at the user-facing end there is no clever stuff to learn. Just a function that takes a value and a list of functions. If you need to add arguments, use a lambda.
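As a sketch of that user-facing shape (using a toy loop implementation standing in for the real, generated function):

```python
# Toy pipe: the real library generates faster equivalents, but the call
# signature is the same - a value, then a series of one-argument functions.
def pipe(value, *funcs):
    for func in funcs:
        value = func(value)
    return value


# sorted -> take first two -> sum; the lambda handles the extra-argument case
result = pipe([3, 1, 2], sorted, lambda xs: xs[:2], sum)  # -> 3
```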
The clever bits are then all behind the scenes, with the only bit of non-standard Python being the idea of the `pipe` function.
I quite like type checking, and even if it’s not good for all projects, I think that a basic function like a `pipe` should play nicely.
Most of the approaches above can be type-hinted, but the `__ror__` class approach only in a way that is incompatible with lambdas - which shows an interesting limit of how the type checkers work.
This isn’t a class, and so doesn’t inherit from Generic to define the arguments. This approach isn’t elegant, but works.
A basic version that knows it wants a value and then callables can be written like this:
def pipe(value: Any, *funcs: Callable[[Any], Any]) -> Any:
    ...
This also gets across the important point that the functions expect only one real parameter, but it does not capture the chain aspect - that the input of the first function is the same type as the initial value, and the input of the second function is the output of the first. For this we can’t use the `*args` approach, and need to type each function individually. Here is the simplest example:
InputType = TypeVar("InputType")
Output1 = TypeVar("Output1")
Output2 = TypeVar("Output2")
def pipe(value: InputType, op1: Callable[[InputType], Output1], op2: Callable[[Output1], Output2]) -> Output2:
    ...
How do you scale this? You just keep adding overload options:
InputType = TypeVar("InputType")
Output1 = TypeVar("Output1")
Output2 = TypeVar("Output2")
Output3 = TypeVar("Output3")
@overload
def pipe(value: InputType, op1: Callable[[InputType], Output1], op2: Callable[[Output1], Output2]) -> Output2:
    ...

@overload
def pipe(value: InputType, op1: Callable[[InputType], Output1], op2: Callable[[Output1], Output2], op3: Callable[[Output2], Output3]) -> Output3:
    ...

def pipe(value: Any, *funcs: Any):  # type: ignore
This gets annoying to do manually, so I’ve made a jinja template that automatically creates it.
This approach does mean there is a hard limit to the number of functions, but in practice the circumstances where I’ve wanted to use a pipe rarely goes past 5 or 6. I’ve set the limit in the library to 20. In principle, you could have a final fallback option for an infinite pipe, but I’ve chosen not to do this.
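To give a flavour of the generation step (the repo uses a jinja template; this plain string-formatting sketch is just my illustration of the shape of each generated stub):

```python
# Build the text of one @overload stub for a pipe of n functions.
# A sketch mirroring what a template would emit, in plain string formatting.
def overload_stub(n: int) -> str:
    params = ", ".join(
        "op{i}: Callable[[{src}], Output{i}]".format(
            i=i, src="InputType" if i == 1 else "Output{}".format(i - 1)
        )
        for i in range(1, n + 1)
    )
    return "@overload\ndef pipe(value: InputType, {params}) -> Output{n}: ...".format(
        params=params, n=n
    )


print(overload_stub(2))
# @overload
# def pipe(value: InputType, op1: Callable[[InputType], Output1], op2: Callable[[Output1], Output2]) -> Output2: ...
```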
This approach works great for typing; the trick is that the `pipe` method returns (or pretends to return) a new instance of the Pipe object. The instance only has to know the type of the current value, and takes its new type from the output of the function that is applied.
from __future__ import annotations
from typing import TypeVar, Generic, Callable
InputValue = TypeVar("InputValue")
OutputValue = TypeVar("OutputValue")
class Pipe(Generic[InputValue]):
    def __init__(self, value: InputValue):
        self.value = value

    def pipe(self, func: Callable[[InputValue], OutputValue]) -> Pipe[OutputValue]:
        return Pipe(func(self.value))

    def end(self) -> InputValue:
        return self.value
This approach can even be typed in a way that will work for pre-defined functions and classes.
I = TypeVar("I")
T = TypeVar("T")
P = ParamSpec("P")
class Pipe(Generic[I, P, T]):
    @overload
    def __init__(self, f: Callable[[I], T], *args: Any, **kwargs: Any):
        ...

    @overload
    def __init__(self, f: Callable[Concatenate[I, P], T], *args: P.args, **kwargs: P.kwargs):
        ...

    def __init__(self, f: Any, *args: Any, **kwargs: Any):
        self.f = f
        self.args = args
        self.kwargs = kwargs

    def __ror__(self, other: I) -> T:
        return self.f(other, *self.args, **self.kwargs)
The different overload options handle a situation where additional arguments for the function can be passed to the Pipe. So this would be typed correctly, if the `foo` function had an `extra_param` argument:
result = "" | Pipe(foo, extra_param=True)
Static type checkers can handle the following, because the str class can say what the output is in advance.
result = 5 | Pipe(str) # result is string
However, this doesn’t work for lambdas, because all the parameters of a lambda are unknown. This was OK for the previous example because the Pipe object had encountered the input type first (when the Pipe was created), could feed this into the lambda and it could infer the output value. In this case the object encounters the lambda when the instance of Pipe is created, before it later encounters the 5 as part of the comparison. This means that the lambda ‘x’ is unknown, and so the value returned by the lambda is also unknown.
You can fix this by adding some typing around the lambda, but this is again having to learn something new and adding extra layers to simple calls.
SingleInput = TypeVar("SingleInput")
LambdaOutput = TypeVar("LambdaOutput")
class TypedPipe(Generic[SingleInput, LambdaOutput]):
    def __init__(self, f: Callable[[SingleInput], LambdaOutput]):
        self.f = f

    def __ror__(self, other: SingleInput) -> LambdaOutput:
        return self.f(other)
t = range(5) | Pipe(str) | Pipe(",".join) | TypedPipe[str, str](lambda x: x + "hello")
# t is str
While the function approach has the ugliest typing behind the scenes, it still works, and means a relatively simple syntax can be used. Importantly, this syntax can be optimized far more than other approaches.
Having got the basic approach, I had a think about whether there was any way of reducing the overhead of using a pipe.
My final approach generates large amounts of boilerplate code to avoid loops and value assignments and includes a decorator that rewrites the AST tree to speed up the process.
The most basic version of a pipe function looks like this:
def pipe(value, *funcs):
    for f in funcs:
        value = f(value)
    return value
This is very compact, but has a speed penalty over the hard-to-read but direct original code.
For instance, rewriting `d(c(b(a(value))))` as `pipe(value, a, b, c, d)` requires going through a loop, which unpacks each function to `f` and keeps reassigning `value`. The original just keeps passing the value up without assigning it to any intermediate variables. As calling the function at all has an overhead, this would be good to reduce.
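A rough way to see the overhead, using the basic looped pipe from above against directly nested calls (exact numbers vary by machine, so none are claimed here):

```python
import timeit


# The basic looped pipe from above, for comparison purposes.
def pipe(value, *funcs):
    for f in funcs:
        value = f(value)
    return value


def a(x):
    return x + 1


def b(x):
    return x * 2


# Same computation both ways: nested calls vs the looped pipe.
nested = timeit.timeit(lambda: b(a(b(a(0)))), number=100_000)
looped = timeit.timeit(lambda: pipe(0, a, b, a, b), number=100_000)
# The looped version typically comes out slower per call.
```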
One way of addressing this would be:
def pipe(value, op0, op1, op2, op3):
    return op3(op2(op1(op0(value))))
This only has the overhead of the function call itself, with far fewer value assignments.
I tried a few different approaches to making this work for variable-length pipes, and the fastest turned out to be:
def pipe(value, op0, op1=None, op2=None, op3=None):
    if not op1:
        return op0(value)
    if not op2:
        return op1(op0(value))
    if not op3:
        return op2(op1(op0(value)))
Here, unpacking the values in the function signature turned out to be faster than unpacking a tuple later on, and a basic truthy comparison was better than an explicit comparison to None. The new `match` statement syntax does let you branch on the number of functions passed in, but this was slower than this approach.
Like the type hinting, this is inelegant, but can be easily generated through a jinja template. There is some unnecessary overhead in assigning values you never use for the later functions, but even with the limit set at 20 this was still quicker than the basic looped version of the pipe. There’s a test that checks this approach is faster than the basic method.
You could make this even quicker by providing fixed-length `pipe2`, `pipe3`, `pipe4` methods, but that seemed to be adding extra things to learn. Instead, where performance is absolutely required, we can abstract the entire process and do the same thing by a more complicated route.
Way above, I linked to the robinhilliard/pipes library, which rewrites the AST of functions to make the `>>` operator work like a pipe. That version does not work for type checking because it introduces unexpected syntax into Python. However, if the rewrite takes something that type checking does understand, but rewrites it to be faster behind the scenes, this approach plays well.
Taking the basic code from that library, I created a new set of rewrite rules that reorder how functions are called:
@fast_pipes
def func():
    return pipe(value, a, b, c, d, e)

# is equivalent to
def func():
    return e(d(c(b(a(value)))))
I then took this a bit further. As using a lambda function introduces an extra function call into the pipe, why not expand those out at the same time? I added some new rewrite rules that expressed the effect of a lambda in the pipe in a more basic way and avoided some extra function calls.
@fast_pipes
def func():
    return pipe(value, lambda x: foo(value_slot=x))

# is equivalent to...
def func():
    return foo(value_slot=value)
Where the value is used multiple times in the lambda, it introduces a walrus operator to cache the value of the previous step:
@fast_pipes
def func():
    return pipe(value, bar, lambda x: foo(value_slot=x) + x)

# is equivalent to...
def func():
    return foo(value_slot=(v := bar(value))) + v
This is horrible to read if you actually wrote it that way, but is a useful efficiency gain in the rewrite.
This function is now solving the same problem three different ways: the type hinting logic has to work, the normal Python logic has to work, and the rewritten AST logic has to work. Type-hint testing is done through pytest-pyright. The package contains tests that rewritten functions are functionally equivalent to, and run faster than, the un-rewritten functions.
One thing I learned looking at all the different approaches is that no one likes using anyone else’s pipe library and everyone has their own approach. This is mine, I learned a lot along the way, if other people find it useful, that’s nice.
I’ve released the `lock-defaults` package. This is mostly a little toy function that gets rid of the need for an annoying Python pattern.
It can be installed via pip, or just by copying and pasting from GitHub (it only depends on the standard library).
I’m using this project to explore useful approaches to pull into a standard template for a python library.
Will write that up more later, but I’m trying an approach where a GitHub Action checks for a difference between the version of the package on PyPI and the version in poetry. If there is one, it publishes the new version automatically. Before it does that, it runs pytest, which includes a check that the changelog has been updated for the new version.
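The core of that check is just a version comparison. As a rough sketch (the real action’s logic lives in the repo; the `needs_publish` name and the dotted-integer version assumption are mine):

```python
# Decide whether to publish: is the local pyproject version ahead of PyPI?
# Assumes simple dotted-integer versions like "0.1.2" - a sketch, not the
# real action, which would fetch both versions rather than take strings.
def needs_publish(pypi_version: str, local_version: str) -> bool:
    def parse(version: str) -> tuple:
        return tuple(int(part) for part in version.split("."))

    return parse(local_version) > parse(pypi_version)


needs_publish("0.1.0", "0.1.1")  # -> True: publish
needs_publish("0.1.0", "0.1.0")  # -> False: nothing to do
```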
Python has a weird behaviour around default values for functions. If you use an empty list as a default argument, things added to the list during the function can hang around for the next time the function is called. A common pattern for dealing with this is the following:
def func(foo=None):
    if foo is None:
        foo = []
But this looks rubbish! And gets worse when you add typing:
def func(foo: list | None = None):
    if foo is None:
        foo = []
You don’t need that workaround for any other kind of default value. Why does the list parameter have to pretend it can be None, when that’s not the intention at all?
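For anyone who hasn’t hit it, the underlying behaviour is easy to demonstrate:

```python
# The classic mutable-default pitfall: one list object is created when the
# function is defined, and every call without an argument shares it.
def append_to(item, target=[]):
    target.append(item)
    return target


first = append_to(1)   # returns [1]
second = append_to(2)  # returns [1, 2] - the default remembered call one
# first and second are the very same object: the leaked default list.
```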
The `lockmutabledefaults` decorator fixes this by introducing what should be the default behaviour: default values that are lists, dictionaries or sets are isolated in each run of the function.
@lockmutabledefaults
def func(foo: list = []):
    pass
After seeing some examples of people doing this with other microcontrollers, I wanted to experiment a bit with a vintage electronics project and get a bit closer to how it all works behind the scenes.
What I wanted to try was connecting a BBC Micro keyboard to a modern computer. The BBC Micro’s big day was in the ‘80s, but we had one at home in the 90s, and they’d be at the back of classrooms sometimes. I remember bringing home some ‘learn to code on the micro’ books from the library, so logically this is what I actually first coded on (even if I don’t think I got that far).
I’m following in the footsteps of people who know what they’re doing, and the mechanics of how the keyboard works are explained in this blog post, and this blog post had a better diagram of exactly what the keyboard cable is connected to.
What I did differently was use a Raspberry Pi Pico microcontroller, which can be programmed in a form of Python, so I didn’t have to learn too many things at once to try and make use of it. I bought a broken BBC Micro off eBay and started learning and experimenting.
My final code is in a github repo (more useful for reference purposes than something you can just pick up and use obviously).
Here are some things I learned along the way:
Mechanically a keyboard is a matrix, where keys are arranged into rows and columns. By activating a row and a column, you can then check if that key is currently being pressed. This means rather than a connection for each key, you just need the necessary connections for activating rows and columns, and getting a signal back.
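In pseudo-Python, a software scan of such a matrix looks something like this (a sketch only — the pin-helper functions are hypothetical stand-ins for the real GPIO calls):

```python
def scan_matrix(set_column, set_row, key_pressed, n_columns=16, n_rows=8):
    """Walk every column/row combination and collect the pressed keys.

    set_column/set_row select a position on the matrix; key_pressed reads
    the signal line back. All three are hypothetical pin helpers.
    """
    pressed = []
    for column in range(n_columns):
        set_column(column)
        for row in range(n_rows):
            set_row(row)
            if key_pressed():
                pressed.append((row, column))
    return pressed
```

The pay-off is in the wiring: 16 columns and 8 rows need far fewer connections than 128 individual key lines would.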
The BBC Micro keyboard has a few extra complications. The break key is off the matrix with its own connection, because originally this would have triggered a reset directly. More significantly, to take some of the load off the CPU, the keyboard scanning is handled by a hardware component. Triggered by a regular on-off signal (clock) from the main machine, this cycles through every key to see if it has been pushed. If any key has been pushed, it sends a signal back, which would have interrupted the main CPU, which would then have done a software scan to figure out which key was being pressed. Technically I didn’t have to implement the hardware clock at all and the pico could just run the software scan all the time, but it seemed a good challenge to do both.
This explains all the connections coming out of the keyboard. There are 3 wires for the 3 LEDs (Caps lock, Shift lock, and Cassette Motor), 4 wires to set the column (up to 16 in binary), 3 wires for the rows, 1 for the clock pulse, 1 for the break button, 1 for the output of the hardware scan, 1 for the output of the software scan, 1 to turn the hardware scan on and off, and the 5V and ground wires.
To connect these to the Pico, I used a breadboard, leading to a bit of a jungle of wires. This connects most of the wires to the GPIO (general purpose input output) pins, which can either be set ‘high’ or ‘low’ to send a signal, or can read if a ‘high’ or ‘low’ signal is being sent from the other end. The pico is running CircuitPython, which has libraries to help the pico act as an interface device like a keyboard or mouse. This meant most of the really fiddly stuff was handled for me, and I just had to write a script connecting the signals to and from the physical keyboard with how keys should be pressed and released on a normal keyboard. The video below shows one of my early attempts to control both the status lights and get input from the break button.
The final CircuitPython script has a main loop which implements the 1 MHz clock, controlling the hardware scan. If a key is pressed, the pico detects the hardware scan output going high, turns off the hardware scan, and starts a software scan to find out which key it was. I got stuck here for a while, but eventually figured out that a clock pulse (something that would have kept going independently anyway if it was controlled by hardware) is needed to set the column after the value is changed. Adding that in successfully gave me back a column and row position for any currently pressed keys.
Then came the really boring bit, mapping each key to the correct value. For most this is simple enough (you press A, set the map for that row and column to `Keycode.A`). However, the Micro keyboard is by modern standards non-standard. It has a separate @ key, and different characters on a number of the shifted keys. There is no backspace or alt key, but there is a COPY key and a shift lock button.
Because the adafruit package I’m using to act as a keyboard assumes a US keyboard layout, I went with a fairly basic approach to solving this problem: for certain keys the script pretends the shift key is down, and for others it pretends a completely different key is pressed if shift is down. Technically you could do a nicer key mapping using adafruit’s option for other keyboard layouts, but it works, so it’s good enough. I haven’t implemented shift lock (that key acts as alt instead), and I’ve set ‘Copy’ to Windows/Control. The keyboard interface knows the system state of caps lock, so there’s a stage in the loop that checks the caps lock LED is in sync with it. The break key is outside the matrix grid, so it has its own button handling and acts as a backspace.
The one remaining thing I had to learn about was ‘debouncing’ and key repeat. When you hold a key down, a normal keyboard waits a certain amount of time before registering it again, but after that it assumes you want to press the button lots (especially useful on backspace). I added a small queue that stops a key-press being triggered again for the same key too quickly, with the interval dropping if the key is still being held down.
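That repeat logic can be sketched like this (my own illustration of the timing behaviour, not the code from the repo — the class and parameter names are made up):

```python
class KeyRepeat:
    """Report a held key slowly at first, then at a faster repeat rate."""

    def __init__(self, first_delay=0.5, repeat_delay=0.05):
        self.first_delay = first_delay
        self.repeat_delay = repeat_delay
        self.state = {}  # key -> [time first pressed, time last reported]

    def press(self, key, now):
        """Return True if a key-press event should be reported at time `now`."""
        if key not in self.state:
            # new press: report immediately
            self.state[key] = [now, now]
            return True
        press_time, last_report = self.state[key]
        # longer wait before the first repeat, shorter ones after that
        delay = self.first_delay if last_report == press_time else self.repeat_delay
        if now - last_report >= delay:
            self.state[key][1] = now
            return True
        return False

    def release(self, key):
        self.state.pop(key, None)
```

The first event fires immediately, the second only after the longer initial delay, and any further repeats at the faster rate.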
Once this was all working, I needed to get my jungle of wires into something a bit more robust. I considered learning how to use some circuit drawing software and getting a PCB made, but it seemed inefficient for a one-off.
Instead, I had to relearn how to solder, and make a little board that could take the ribbon cable from the keyboard, and map the inputs to the pins of a pico. After eventually getting the hang of how soldering worked, on my third attempt to make a connector I managed to correctly wire a little board to do the work. After plugging this in, all I had to do was adjust the mapping as I’d moved some of the connections I was using around.
At the end of this is a perfectly functional keyboard that, plugged into a modern machine, works exactly as you’d want. Now I needed something to plug it into.
Technically I could get another pico to emulate the BBC Micro, and then plug my pico into it. This might be fun at some point, but it looks a bit fiddly. Instead I’m using an old Raspberry Pi 3, which can fit in the BBC Micro case and run Linux, so is a bit more multifunctional. This is annoyingly just not quite fast enough to run a BBC Micro emulator in the browser. It works for light browsing, and can also run other emulators - which means I can play the DOS games of the 90s, happily on a 2015 mini-computer, inside the case of an ’80s computer. This isn’t the easiest way to do that, but I did learn a lot, and isn’t that fun?
Since the last time I tried it a few years ago, Google Translate now lets you download a PDF of the translation of a PDF (preserving the page numbers). Machine translation also seems a lot better than it used to be; the scope of what’s an easily understandable source just keeps expanding.
Been trying out mermaid charts for a project, and it’s pretty great. Would be nice to wrap it in something like altair_saver and be able to fit it into my notebook workflows.
ggplot2 in R has a better set of functions for slightly offsetting (jittering) overlapping points, so you get a sense that a lot of points are at 0,0.
Altair in Python doesn’t have a way of doing this, but I found this stackoverflow answer that did most of the needed bits. I’ve adjusted it so the random offsets can be negative, and the process repeats until the minimum offset value is reached for all points. I wasn’t sure how some of the numpy bits were working, so I’ve made some of the comments more explicit.
from typing import List

import numpy as np
import pandas as pd
from scipy.spatial.distance import pdist


def jitter_df(
    df: pd.DataFrame,
    cols: List[str],
    threshold: float = 0.2,
    jitter: float = 0.1,
) -> pd.DataFrame:
    """
    Stops overlap in plotted graphs by moving apart overlapped values
    in specified cols.
    Extends answer from https://stackoverflow.com/a/58772101
    """
    n = len(df)
    while True:
        # calculate pairwise distances for the specified columns
        p = pdist(df[cols])
        # pdist returns the condensed form: each pair (A, B) appears
        # just once; these are the matching upper-triangle index pairs
        i, j = np.triu_indices(n, 1)
        # initialise a mask of False
        too_close = np.zeros(n, bool)
        # in-place operation:
        # for indices (i), check if distance (p) is below threshold
        # and update mask (too_close) at the same position
        np.logical_or.at(too_close, i, p <= threshold)
        overlap_count = too_close.sum()
        if overlap_count == 0:
            # we're done, escape
            return df
        # random offsets either side of 0
        shape = (overlap_count, len(cols))
        rng = (np.random.rand(*shape) * jitter) - (jitter / 2)
        # apply offsets to the rows that are too close
        df.loc[too_close, cols] += rng
This week in mySociety work, we published a blog post about Freedom of Information being good.
There’s a link in that to a good study looking at the connection between corruption and freedom of information, which makes the point that the mixed evidence base is because of overlapping effects. Increasing transparency has an immediate effect of increasing the detection and successful prosecution of corruption, while in the longer term decreasing the probability of corruption. If you look at this wrong though, you get the very counter-intuitive finding (which that paper points out is “in contrast with the most straightforward economic theories of crime”) that more transparency increases corruption.
My general sense is that backfire (when something doesn’t just not work, but has the opposite of the intended effect), is rare, but is such a good story it gets through our guards a bit. It’s certainly gotten through mine before and I wrote a blog post a few years ago about my lost confidence in backfire research around correcting people’s information. Similarly the intuitive counter-intuitive finding that “scared straight” programmes increase crime doesn’t seem to hold up, and the evidence should probably be best described as “no effect” (maybe it sometimes makes things worse, sometimes makes things better). It’s much more likely that something just doesn’t work than manages to do the opposite of what you want.
Taking this too far leaves you blind to “are you sure you’re not making things worse?” questions, but a clearer sense of when counter-intuitive effects are suspect helps make that discussion sharper.
I’m writing something complicated at work, so I didn’t want to open up the new sections too much in my head. Instead I have been slotting in a few paragraphs incorporating things that have been published more recently.
When you write something for years, new stuff comes out. This is in principle good news, because more information makes you more right, either because it nicely fits into your existing argument (validating it), or because it doesn’t and now you can take out the wrong thing and don’t look stupid.
Generally, I’m getting to the point where new publications are not surprising, and nicely fit into the existing argument. Did have a bit of a shock last year though when the new version of the Google Ngram data changed the interpretation, which then needed some thought about some extra ways of validating the approach that are less likely to be upset by future data. The new version of the argument is slightly less sharp, but should be more defendable over time.
One of the fears about getting to the end is locking in major errors. About five years ago I carved out a section and polished it into its own essay for a competition. It only got as far as being long listed, but I’d feel really stupid if it had actually got published because I found a few new sources (one to be fair published later that year) that substantially questioned my core narrative of the historical process I was describing. Writing such a cross-disciplinary book makes it feel especially vulnerable to this.
At some point, you have to stop writing and start being wrong though.