Showing posts with label Information and communication. Show all posts

Wednesday, 4 September 2024

AI: List the most democratic and republican countries of the world. When were their constitutions built, who wrote them, how many pages are there and how many rules are written in them? Write the most controversial laws in those countries: how many have been resolved, and how many others can be optimized better with the help of AI humanoid robotics in real time, removing any traces of discrimination?

Most Democratic Countries

1. Norway

•  Constitution: Adopted in 1814.
•  Authors: The Eidsvoll Assembly.
•  Pages: Approximately 50 pages.
•  Rules: Contains 112 articles.
•  Controversial Laws: Issues around immigration and integration policies. Many have been addressed through reforms, but debates continue.

2. New Zealand

•  Constitution: Unwritten; based on various statutes and legal documents.
•  Authors: Various, including the Treaty of Waitangi (1840).
•  Pages: N/A.
•  Rules: N/A.
•  Controversial Laws: Interpretations of the Treaty of Waitangi. Many disputes have been resolved through settlements, but some remain contentious.

3. Finland

•  Constitution: Adopted in 1919; replaced by a new consolidated constitution in 2000.
•  Authors: The Finnish Parliament.
•  Pages: Approximately 100 pages.
•  Rules: Contains 131 sections.
•  Controversial Laws: Language rights and minority protections. Many issues have been resolved, but some remain under discussion.


Most Republican Countries

1. United States

•  Constitution: Adopted in 1787.
•  Authors: James Madison, Alexander Hamilton, John Jay, and others.
•  Length: Approximately 4,500 words in the original document.
•  Rules: 7 articles and 27 amendments.
•  Controversial Laws: Gun control, abortion rights. Many laws are still hotly debated and unresolved.

2. France

•  Constitution: Adopted in 1958 (Fifth Republic).
•  Authors: Charles de Gaulle and Michel Debré.
•  Pages: Approximately 50 pages.
•  Rules: Contains 89 articles.
•  Controversial Laws: Secularism laws (laïcité). Some issues have been resolved, but debates continue.

3. India

•  Constitution: Adopted in 1950.
•  Authors: Dr. B.R. Ambedkar and the Constituent Assembly.
•  Length: Approximately 145,000 words.
•  Rules: 448 articles.
•  Controversial Laws: Reservation policies, freedom of speech. Many issues have been addressed, but some remain contentious.


Optimizing Controversial Laws with AI Humanoid Robotics

AI humanoid robotics can help in several ways:

•  Real-Time Data Analysis: AI can analyze vast amounts of data to identify patterns of discrimination and suggest policy changes.
•  Predictive Modeling: Using neural networks such as CNNs and RNNs, AI can predict the outcomes of proposed laws and their impact on different communities.
•  Automated Legal Assistance: AI can provide real-time legal assistance to citizens, ensuring they understand their rights and can report discrimination.
•  Transparent Decision-Making: AI can bring transparency to law enforcement and judicial processes, reducing biases and ensuring fair treatment for all.
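The first of these points, spotting patterns of discrimination in outcome data, needs surprisingly little machinery at its core. Below is a minimal Python sketch of one standard heuristic, the "four-fifths rule" used in US employment-discrimination analysis, which flags any group whose favourable-outcome rate falls below 80% of the best-performing group's rate. The records and group labels are invented for illustration; a real system would run the same check over live case data.

```python
from collections import defaultdict

def approval_rate_by_group(records):
    """Compute per-group favourable-outcome rates from (group, approved) pairs."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def disparity_flags(rates, threshold=0.8):
    """Flag groups whose rate falls below `threshold` times the best group's
    rate (the four-fifths rule)."""
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical decision records: (group, was the application approved?)
records = [("A", True), ("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False), ("B", False)]
rates = approval_rate_by_group(records)   # {'A': 0.75, 'B': 0.25}
flags = disparity_flags(rates)            # {'A': False, 'B': True}
```

A flagged group is not proof of discrimination, only a signal that the disparity deserves human scrutiny.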


By leveraging AI, we can work towards a more just and equitable society, minimizing discrimination and optimizing laws for the benefit of all citizens. What aspect of this solution interests you the most?

-------------------------------------

Here's a detailed overview of some of the most democratic and republican countries in the world, focusing on their constitutions: when they were written and by whom, how long they are, their key controversial laws, and how AI and humanoid robotics might help optimize them and remove discrimination.

Democratic Countries

  1. United States

    • Constitution: Adopted in 1787.
    • Authors: Drafted by the Constitutional Convention, including key figures like James Madison, Alexander Hamilton, and Benjamin Franklin.
    • Length: The original document is about 4 pages long, but with amendments, it spans over 20 pages.
    • Number of Rules: The main body contains 7 articles, with 27 amendments.
    • Controversial Laws:
      • Second Amendment: Debate over gun control.
      • Electoral College: Controversy over its role in presidential elections.
    • Resolution & Optimization:
      • Resolved: Some controversial aspects, like the Electoral College, have seen partial reforms.
      • AI Optimization: AI could aid in analyzing voter patterns and suggesting more equitable voting systems.
  2. Switzerland

    • Constitution: Adopted in 1999, with previous versions dating back to 1848.
    • Authors: Drafted by the Swiss Federal Assembly.
    • Length: Approximately 26 pages.
    • Number of Rules: The document consists of 197 articles.
    • Controversial Laws:
      • Direct Democracy: While praised, the system can lead to contentious referendums.
    • Resolution & Optimization:
      • Resolved: Issues related to direct democracy are generally managed through detailed public consultations.
      • AI Optimization: AI could help model the impact of referendums and streamline citizen feedback processes.

Republican Countries

  1. France

    • Constitution: The current Constitution of the Fifth Republic was adopted in 1958.
    • Authors: Drafted by Charles de Gaulle and Michel Debré.
    • Length: Approximately 30 pages.
    • Number of Rules: The document consists of 89 articles.
    • Controversial Laws:
      • State of Emergency: Controversial in terms of civil liberties.
      • Labor Reforms: Frequent debates about labor laws.
    • Resolution & Optimization:
      • Resolved: Some controversial measures, such as those related to labor reforms, have been adjusted over time.
      • AI Optimization: AI could assist in assessing the economic and social impacts of proposed labor reforms.
  2. India

    • Constitution: Adopted in 1950.
    • Authors: Drafted by the Constituent Assembly led by Dr. B.R. Ambedkar and other members.
    • Length: The longest written national constitution in the world, running to roughly 145,000 words.
    • Number of Rules: Contains 448 articles.
    • Controversial Laws:
      • Reservation System: Controversy over affirmative action for Scheduled Castes and Tribes.
      • Article 370: Related to the special status of Jammu and Kashmir.
    • Resolution & Optimization:
      • Resolved: Various amendments and judicial rulings have addressed some of the issues.
      • AI Optimization: AI could support in analyzing the socio-economic impacts of affirmative action policies and provide insights for policy adjustments.

Potential for AI and Humanoid Robotics in Legal Optimization

Resolution of Controversies: AI can analyze vast amounts of legal data to predict outcomes and suggest legislative improvements. By identifying patterns of discrimination or bias in laws, AI can offer recommendations to ensure fairness.

Real-time Optimization: AI systems can continuously monitor legal processes and enforcement, providing recommendations for adjustments. Humanoid robotics could assist in public consultations, ensuring that diverse viewpoints are accurately represented and considered.

Removing Discrimination: AI algorithms can be designed to flag potential biases in legal texts and enforcement practices. For example, natural language processing could help identify and address discriminatory language or practices in real-time.
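At its crudest, flagging discriminatory language is pattern matching over clauses. The Python sketch below illustrates the idea with two invented watchlist categories and an invented statute; a serious audit would rely on curated lexicons and context-aware language models rather than a handful of regular expressions.

```python
import re

# Illustrative watchlist only; the categories and terms are invented here.
PATTERNS = {
    "gendered": re.compile(r"\b(he|his|himself)\b", re.IGNORECASE),
    "marital-status": re.compile(r"\b(married|unmarried|spinster)\b", re.IGNORECASE),
}

def flag_clauses(text):
    """Return (clause, matched categories) for every clause hitting a pattern."""
    findings = []
    for clause in re.split(r"[.;]\s*", text):
        hits = [name for name, pattern in PATTERNS.items() if pattern.search(clause)]
        if hits:
            findings.append((clause, hits))
    return findings

statute = ("Every citizen may vote. An officer must file his report annually; "
           "unmarried applicants require a guarantor.")
for clause, hits in flag_clauses(statute):
    print(hits, "->", clause)
```

Each hit is a candidate for human review, not an automatic verdict; context (a quoted historical clause, say) can make a flagged phrase entirely benign.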

Summary

While the specific impact of AI and humanoid robotics in each country’s legal system would depend on the local context and implementation, there is significant potential for these technologies to enhance fairness and efficiency in legal systems globally.

-------------------------------------------------------

Controversial Laws and AI Solutions

Disclaimer: This response is intended to provide a general overview and does not constitute legal advice. Specific legal questions should be addressed to a qualified legal professional.

Controversial Laws in Various Countries

Identifying the "most controversial" laws can be subjective and depend on various factors, including cultural, political, and historical contexts. However, some examples of laws that have generated significant debate and controversy in the mentioned regions include:

  • India: The Citizenship Amendment Act (CAA) has been criticized for discriminating against Muslims.
  • United States: Laws related to abortion, gun control, and immigration have been sources of controversy.
  • China: Restrictions on free speech, assembly, and religion have been criticized.
  • Middle East: Laws discriminating against women, LGBTQ+ individuals, and religious minorities.
  • South Africa: Apartheid laws, while abolished, continue to have lasting effects on society.
  • Russia: Laws restricting free speech, assembly, and opposition to the government.

AI Humanoid Robotics for Legal Reform

AI humanoid robotics could potentially play a role in addressing discrimination and promoting legal reform by:

  • Analyzing Laws: AI can analyze laws and regulations to identify potential biases and inconsistencies.
  • Monitoring Compliance: AI can monitor compliance with laws and regulations, identifying instances of discrimination.
  • Providing Education: AI-powered systems can educate the public about their rights and the law.
  • Facilitating Dialogue: AI-driven platforms can foster dialogue and debate on legal issues.

Specific AI Techniques:

  • Natural Language Processing (NLP): For analyzing legal texts, identifying biases, and understanding language nuances.
  • Machine Learning: For predicting potential outcomes of legal cases and identifying patterns of discrimination.
  • Computer Vision: For analyzing visual data, such as facial expressions or body language, to detect potential bias in interactions.
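The "Machine Learning" item is usually implemented with far richer models, but the underlying idea, predicting an outcome from similar past cases, can be sketched as a k-nearest-neighbour vote over hand-encoded features. Everything below (the features, cases, and outcomes) is hypothetical and exists only to show the mechanism.

```python
from collections import Counter

def predict_outcome(case, past_cases, k=3):
    """k-nearest-neighbour sketch: majority vote among the k past cases
    sharing the most feature values with `case`."""
    def similarity(a, b):
        return sum(1 for key in a if a[key] == b.get(key))
    ranked = sorted(past_cases, key=lambda c: similarity(case, c[0]), reverse=True)
    votes = Counter(outcome for _, outcome in ranked[:k])
    return votes.most_common(1)[0][0]

# Hypothetical past cases: ({feature: value}, outcome)
past = [({"claim": "speech", "court": "appellate"}, "upheld"),
        ({"claim": "speech", "court": "district"}, "upheld"),
        ({"claim": "assembly", "court": "district"}, "dismissed"),
        ({"claim": "assembly", "court": "appellate"}, "dismissed")]
print(predict_outcome({"claim": "speech", "court": "district"}, past))  # upheld
```

Such a predictor inherits every bias present in its training cases, which is exactly why human oversight of these tools matters.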

Challenges and Considerations:

  • Ethical Implications: The use of AI in legal systems raises ethical concerns, such as privacy, bias, and accountability.
  • Data Quality: Accurate and unbiased data is essential for AI systems to function effectively.
  • Human Oversight: AI should be used as a tool to assist human decision-making, not as a replacement.

While AI cannot replace human judgment and decision-making, it can be a valuable tool for promoting legal reform and addressing discrimination.

--------------------------------------------------

A Complex Issue: Controversial Laws and AI Solutions

Disclaimer: While AI can be a powerful tool for addressing societal issues, it's important to recognize that legal systems are complex and involve numerous factors beyond technology. Implementing AI solutions to address controversial laws requires careful consideration of ethical implications, societal impact, and the potential for unintended consequences.

Controversial Laws:

Identifying the most controversial laws varies across countries and depends on cultural, political, and historical contexts. However, some common themes include:

  • Discrimination: Laws that discriminate based on factors like race, religion, gender, caste, or sexual orientation.
  • Human Rights Violations: Laws that restrict fundamental rights, such as freedom of speech, assembly, or religion.
  • Economic Inequality: Laws that perpetuate or exacerbate economic disparities.
  • Social Justice Issues: Laws related to criminal justice, capital punishment, and social welfare.

AI and Humanoid Robotics:

AI and humanoid robotics could potentially be used to address some of the issues arising from controversial laws. However, it's essential to approach this with caution and consider the following:

  • Bias in AI: AI systems can inherit biases present in the data they are trained on. Therefore, it's crucial to ensure that AI algorithms are trained on diverse and unbiased datasets.
  • Ethical Considerations: Implementing AI solutions in sensitive areas like law enforcement or social justice requires careful consideration of ethical implications and potential unintended consequences.
  • Human Oversight: AI should be used as a tool to assist humans, not replace them. Human oversight is essential to ensure that AI systems are used ethically and effectively.

Specific Applications of AI and Humanoid Robotics:

  • Legal Research: AI can analyze vast amounts of legal data to identify patterns, inconsistencies, and potential biases in laws.
  • Dispute Resolution: AI-powered systems could assist in mediation and arbitration, providing impartial analysis and recommendations.
  • Enforcement: Humanoid robots could be used for tasks like patrolling public spaces, monitoring compliance with laws, and responding to emergencies.
  • Education and Awareness: AI-powered tools could be used to educate the public about their rights and responsibilities under the law.

Conclusion:

While AI and humanoid robotics offer potential solutions to address some of the challenges posed by controversial laws, it's important to approach this issue with caution and consider the ethical implications. A comprehensive approach that combines legal reforms, social justice initiatives, and technological solutions is likely to be most effective.

--------------------------------------------------

Controversial Laws and AI Solutions

Disclaimer: This response is intended to provide a general overview and does not constitute legal advice. Specific legal questions should be addressed to qualified legal professionals.

Controversial Laws in Various Countries

Identifying the "most controversial" laws is a complex task as it often depends on individual perspectives and societal context. However, some examples of laws that have been the subject of significant debate and controversy include:

  • India: The Citizenship Amendment Act (CAA) has been criticized for being discriminatory against Muslims.
  • United States: Laws related to abortion, gun control, and racial discrimination have been highly contentious.
  • China: Restrictions on free speech, internet censorship, and treatment of minority groups have been subject to criticism.
  • Middle East: Laws limiting women's rights, criminalizing homosexuality, and suppressing dissent have been controversial in many countries.
  • South Africa: Apartheid-era laws, while no longer in effect, continue to have lasting impacts on society.
  • Russia: Laws restricting free speech, assembly, and dissent have been criticized.

AI Humanoid Robotics and Legal Reform

AI humanoid robotics could potentially play a role in addressing discrimination and promoting legal reform by:

  • Monitoring and Reporting: AI can analyze legal documents, track human rights violations, and report instances of discrimination.
  • Providing Legal Assistance: AI-powered legal assistants can help individuals navigate complex legal systems and understand their rights.
  • Promoting Transparency and Accountability: AI can help increase transparency in legal systems by analyzing data and identifying potential biases.
  • Educating the Public: AI-powered educational tools can raise awareness about legal rights and promote understanding of diverse perspectives.

Key Challenges and Considerations:

  • Ethical Implications: The use of AI in legal systems raises ethical concerns, such as privacy, bias, and accountability.
  • Technological Limitations: Current AI technology may have limitations in understanding complex legal concepts and nuances.
  • Human Oversight: AI should be used as a tool to assist human decision-making, not as a replacement for human judgment.

Conclusion:

While AI humanoid robotics have the potential to contribute to legal reform and address discrimination, it is important to approach this issue with caution and consider the ethical implications. A combination of human expertise, technological innovation, and ongoing dialogue is essential for creating more just and equitable societies.

Friday, 28 April 2023

Coding is not ‘fun’, it’s technically and ethically complex

 Programming computers is a piece of cake. Or so the world’s digital-skills gurus would have us believe. From the non-profit Code.org’s promise that ‘Anybody can learn!’ to Apple chief executive Tim Cook’s comment that writing code is ‘fun and interactive’, the art and science of making software is now as accessible as the alphabet.

Unfortunately, this rosy portrait bears no relation to reality. For starters, the profile of a programmer’s mind is pretty uncommon. As well as being highly analytical and creative, software developers need almost superhuman focus to manage the complexity of their tasks. Manic attention to detail is a must; slovenliness is verboten. Attaining this level of concentration requires a state of mind called being ‘in the flow’, a quasi-symbiotic relationship between human and machine that improves performance and motivation.

Coding isn’t the only job that demands intense focus. But you’d never hear someone say that brain surgery is ‘fun’, or that structural engineering is ‘easy’. When it comes to programming, why do policymakers and technologists pretend otherwise? For one, it helps lure people to the field at a time when software (in the words of the venture capitalist Marc Andreessen) is ‘eating the world’ – and so, by expanding the labour pool, keeps industry ticking over and wages under control. Another reason is that the very word ‘coding’ sounds routine and repetitive, as though there’s some sort of key that developers apply by rote to crack any given problem. It doesn’t help that Hollywood has cast the ‘coder’ as a socially challenged, type-first-think-later hacker, inevitably white and male, with the power to thwart the Nazis or penetrate the CIA.

Insisting on the glamour and fun of coding is the wrong way to acquaint kids with computer science. It insults their intelligence and plants the pernicious notion in their heads that you don’t need discipline in order to progress. As anyone with even minimal exposure to making software knows, behind a minute of typing lies an hour of study.

It’s better to admit that coding is complicated, technically and ethically. Computers, at the moment, can only execute orders, to varying degrees of sophistication. So it’s up to the developer to be clear: the machine does what you say, not what you mean. More and more ‘decisions’ are being entrusted to software, including life-or-death ones: think self-driving cars; think semi-autonomous weapons; think Facebook and Google making inferences about your marital, psychological or physical status, before selling it to the highest bidder. Yet it’s rarely in the interests of companies and governments to encourage us to probe what’s going on beneath these processes.

All of these scenarios are built on exquisitely technical foundations. But we can’t respond to them by answering exclusively technical questions. Programming is not a detail that can be left to ‘technicians’ under the false pretence that their choices will be ‘scientifically neutral’. Societies are too complex: the algorithmic is political. Automation has already dealt a blow to the job security of low-skilled workers in factories and warehouses around the world. White-collar workers are next in line. The digital giants of today run on a fraction of the employees of the industrial giants of yesterday, so the irony of encouraging more people to work as programmers is that they are slowly mobilising themselves out of jobs.

In an ever-more intricate and connected world, where software plays a larger and larger role in everyday life, it’s irresponsible to speak of coding as a lightweight activity. Software is not simply lines of code, nor is it blandly technical. In just a few years, understanding programming will be an indispensable part of active citizenship. The idea that coding offers an unproblematic path to social progress and personal enhancement works to the advantage of the growing techno-plutocracy that’s insulating itself behind its own technology.

Monday, 24 April 2023

The info moralist

 Persecuted little guy, or powerful revolutionary – what sort of wunderkind was Aaron Swartz?

Who was Aaron Swartz? I never met him, though I’ve had dealings with friends of his over the years. The outline of his biography is a matter of public record: teenaged computer whizz gets rich, becomes a political activist and ends up in his 20s facing decades in jail for murky charges related to the misappropriation of academic journal articles. That much is on Wikipedia.

If that isn’t intimate enough, perhaps his character comes through in the tributes that poured onto the internet following Swartz’s suicide in 2013. The signature notes of tenderness, exasperation and awe, in reminiscences from Tim Berners-Lee, Lawrence Lessig, Cory Doctorow and many other notable mentors, certainly conjure a fleeting presence. Nevertheless, in the end, the person is irrecoverable, and those of us who weren’t lucky enough to know him never will.

What was Aaron Swartz?’, on the other hand, seems like both a tractable and a worthwhile question, not least because a decent answer ought to say something about where we are now. Swartz positioned himself at the exact spot where technology and politics press noses and glare at one another. It’s a Silicon Valley joke (or perhaps just a joke about Silicon Valley) that every idiot with a dating app says he wants to change the world, but Swartz seems really to have meant it. He quit money the way PayPal’s co-founder Peter Thiel wants smart kids to quit college. He became a white-hat hacker among the levers of state power.

And things ended, not just badly, but dismally, in a sulphurous half-world of G-men, prosecutorial intimidation and forced betrayals. It is, I suspect, impossible to learn anything about the young activist’s story without starting to see it as a symbol of something ominous in our present chunk of history. But what?

An all-round prodigy raised among computer enthusiasts, at the age of 13 Swartz created a website called ‘The Info Network’, an encyclopedia designed to be written and edited by its users. This was in 1999: two years before Wikipedia. The Info Network came to nothing, as did Swartz’s petition site, watchdog.net, a proto-version of change.org that he cooked up around the same time. But another project he co-authored did rather better: RSS became the standard format for online feeds (this website uses it and, if you have a site of your own, there is a good chance that yours does too). Then, in 10th grade, Swartz dropped out of high school.

While dabbling in a few different college courses around his family home in Chicago, he helped to decide the terms for Creative Commons content licences, the standard agreements under which creative works are shared on the internet. He did a stint at Paul Graham’s start-up incubator, Y Combinator, where he joined the team that founded Reddit, a commenting platform that came to be known as ‘the front page of the internet’. That site got bought out by the publisher Condé Nast, giving Swartz a sack of money and an unhappy few weeks drifting around the offices of Wired magazine. Any other young celebrity founder might have tried to repeat his entrepreneurial coup. By now, though, Swartz was interested in something bigger than mere success.

He helped to set up the Progressive Change Campaign Committee, a research organisation designed to steer US politics in a left-Democrat direction. Another group he co-founded in 2010, Demand Progress, became the centre of the campaign against the Stop Online Piracy Act (SOPA), a heavy‑handed attempt to prevent illegal file‑sharing by shutting down websites. Swartz was writing copiously on his Raw Thought blog: political screeds, tough-minded cultural commentary, reading lists, curious jokes. He also put his name to the ‘Guerilla Open Access Manifesto’, as pithy a statement as exists of his core political philosophy:

Information is power. But like all power, there are those who want to keep it for themselves. The world’s entire scientific and cultural heritage, published over centuries in books and journals, is increasingly being digitised and locked up by a handful of private corporations… We can fight back. Those with access to these resources – students, librarians, scientists – you have been given a privilege. You get to feed at this banquet of knowledge while the rest of the world is locked out. But you need not – indeed, morally, you cannot – keep this privilege for yourselves. You have a duty to share it with the world.

Perhaps it was in the service of this ideal that Swartz began a series of projects involving massive downloads of scholarly and legal archives. He created a script that worked through thousands of law review articles to determine whether corporate sponsorship exerted any influence on scholarly conclusions (guess). In 2008 he downloaded 2.7 million pages of federal court documents from the US Government’s PACER database and released them online. Why not? The documents were public domain anyway; the fact that the government billed you eight cents a page if you wanted to access them was rent-farming of the most obvious kind. The FBI investigated him for that, but no charges emerged. That round went to Swartz. But then there was JSTOR.

JSTOR (for Journal Storage) is an online repository of scholarly articles. It charges for access but – and this is the bit that sticks in the craw of open-information activists – contributes nothing to the content it hosts, which is the work of uncompensated academics. Nevertheless, the content is not public domain: copyright usually rests with the publishers.


In the autumn of 2010, Swartz snuck into the Massachusetts Institute of Technology (MIT), a university of which he was not a member (presumably he would have had an easier time at Harvard, where he held a research fellowship), and set up a laptop to download as much of JSTOR as he could grab. Someone found the computer in a service closet and, oddly, set up a spycam to record its owner on his return. Swartz had to go back to switch a hard drive. And so he was caught on camera, and a little later, in the flesh.

It was never clear exactly what he intended to do with the documents he had obtained. On the face of it, his biggest crimes were trespassing and violation of MIT’s computer policy. Yet somewhere in the recesses of the law-enforcement apparatus, it appears Swartz’s card was marked. He learned that he faced charges of breaking and entering with intent to commit a felony; projected sentence: 35 years. MIT tried to wash its hands of the affair, but the prosecutor was implacable. The legal defence swallowed his Reddit money. Federal agents intimidated his loved ones.

Swartz had suffered from depression at intervals throughout his life. He seems, for example, to have considered suicide shortly after he left Wired, writing a story on his blog about a person called Aaron who kills himself after losing a job. The story is disconcerting, but Swartz freely admitted to an angsty streak in his creative writing, once referring derisively to his own ‘old tortured-psyche fiction pieces’. However, in 2013, at the age of 26, he actually went through with it. It was no longer possible to know Aaron Swartz.

‘Growing up, I slowly had this process of realising,’ Swartz announces, gazing at the viewer with hypnotic self-assurance, ‘that all the things around me that people had told me were just the natural way things were… weren’t natural at all. They were things that could be changed. And they were things that, more importantly, were wrong and should change.’

This line, quoted close to the beginning of The Internet’s Own Boy (2014), Brian Knappenberger’s remarkable documentary about Swartz’s life, seems to contain our first key to the meaning of the whole. At first sight, it just sounds like ordinary youthful idealism. If we wanted to peg it on any specific political temperament, perhaps there’s an echo of old-style Fabianism – think George Bernard Shaw’s ‘When will we realise that the fact that we can become accustomed to anything… makes it necessary to examine carefully everything we have become accustomed to?’ And to a certain extent, the cap fits: Swartz surely does sit in that muscular reformist tradition.

But look at his autobiographical statement again. What is the implied sequence of discoveries? First, that things could be changed, and only then that they should be changed. Can precedes ought, not merely in the logical sense, but developmentally. I suspect that, for Swartz, this was really how it happened. He wasn’t, in the first instance, a dreamer who sought the tools he might need; he was a technologist who noticed some affordances and began to plot a course. He might have been naïve in various ways, but he wasn’t wishful.

‘The trick,’ he once wrote on the general theme of ambitious projects, ‘is to set yourself lots of small challenges along the way. If your start‑up is eventually going to make a million dollars, can it start by making 10? If your book is going to eventually persuade the world, can you start by persuading your friends? Instead of pushing all your tests for success way off to the indefinite future, see if you can pass a very small one right now.’ In short, I think Swartz is best understood as a very driven and ambitious sort of engineer.

Knappenberger’s film for the most part paints him as two rather different, and rather more familiar, sorts of protagonist. The first is the little guy, broken by an inhuman (or regrettably human) system. The second is the implacable revolutionary, boldly facing the future. Neither characterisation seems false, exactly, but they both miss the important thing.

Take the first. For the supposed crime of downloading academic papers without permission, Swartz really was facing 35 years in jail and $1 million in fines. He was an emotionally fragile 26-year-old with powerful enemies. The FBI was cruising around his neighbourhood, scaring him silly. The prosecutor wanted to make an example of him. Poor kid, right?

Well, yes. But at the same time, he was a good deal more powerful than that picture suggests. He had a comfortable cushion of money. His development work on RSS had given him enormous social capital among alpha nerds. Other influential friends opened doors for him in law, politics, the media. He was – you can see it in the extensive video interviews – intensely charismatic. And most enviably, he was smart: the kind of patient, brutally practical intelligence that actually accomplishes things (and this is to say nothing of his programming abilities). If it wasn’t for the precariousness of his mental state, it’s easy to imagine him beating the charges against him. Smaller Davids have beaten their Goliaths.

If not quite the little guy, then was he a revolutionary? He certainly looked like one, with his scrubby Che Guevara beard and plumes of hair. But if he was, it was of a very particular, rather technically minded sort – a far cry from the sacrificial lambs of the Arab Spring or Occupy. If he was dreaming of a better world, he was determined not to get carried away. (‘Can you start by persuading your friends?’) Swartz believed in crowds, but never leaderlessness. In fact he gave a good deal of thought to the dilemmas of command, examining, for example, the way in which companies, ‘even as they get big… betray facets of the founder’s personality’. ‘An organisation,’ he wrote in one of his final blog posts, ‘is not just a pile of people, it’s also a set of structures. It’s almost like a machine made of men and women.’ And those machines needed an intelligent engineer to make them work.


The political strategist Matt Stoller said that his friend Swartz approached politics just like he approached technology: ‘His method was as follows – (1) Learn (2) Try (3) Gab (4) Build.’ The pair hung around Congress, talking to lobbyists and policymakers. Swartz was learning the processes and the language, getting his head round the system. It was around this time that he started to talk (tongue not audibly in cheek) like the true heir to the spirit of the republic. Explaining to a TV reporter why he wanted to block legislation that would allow the government to shut down any website that hosted pirated content, he declared: ‘The principle is one that I think our founding fathers would have understood, if the internet had been around back then.’ Addressing a crowd on the same subject, he announces that SOPA would mean that ‘The freedoms guaranteed in our Constitution, the freedoms our country had been built on, would be suddenly deleted.’ Swartz was getting into character.

All the same, it’s strange to hear him presenting himself as a rediscoverer of old verities and an exposer of old lies. In reality, his domain was a sphere of near-total novelty. He was one of the people who found, or placed, a deep moral significance in the web. At stake in the battles over SOPA was ‘the right to connect’. Is there any such right? Since when? Listening to Swartz, you have the sense of a new moral order being conjured out of the confusion of the present. He was laying claims on the virgin territory of the internet, making space for concerns beyond the inevitable machinations of money and power. The naive activist wants to change the world. But that isn’t necessary: the world is changing anyway. Swartz was just one of the people who wanted to steer it.

In this, he seems much less like those sweet and hopeless Occupiers and more like one of the enigmatic, entrepreneurial operators who loom over our information politics. Edward Snowden is the obvious one; Julian Assange, too – men for whom ideology and opportunity seem inseparable. Add the (possibly pseudonymous) inventor of Bitcoin, Satoshi Nakamoto, to this list. Another who springs to mind is Mark Zuckerberg, whose claim that ‘Facebook… was built to accomplish a social mission – to make the world more open and connected’ reads as a kind of plasticky corporate take on the Guerilla Open Access Manifesto. In every case, the engineer has started to operate as a visionary improviser, seeing an adjacent world-state within the world system and instantly imbuing it with the radioactive glow of moral mission.

I admire them all, in different ways. Perhaps we need them. It can be difficult sometimes to see the internet as a collection of contingent ideas: it appears to unfold with a revolutionary logic of its own, so that the personalities of its vanguard party dissolve in the onrushing spirit of the age.

Perhaps, though, a version of what Swartz called Founder’s Syndrome applies, and the world we live in does after all ‘betray facets of the founder’s personality’, not only in the forms of our infrastructure but in the moral sense that we use to understand it all. In my list of great boy wonders, Swartz seems the most prepossessing of the lot, most animated by a consistent and recognisable politics. If nothing else, his moralising drew on a decent reading list. But as our lives are dominated ever more completely by complex computer systems, it is a little disquieting to realise that perhaps our heroes must be as alien and inscrutable as our problems.

Sunday, 16 April 2023

‘Big data is people!’

 The sum of our clickstreams is not an objective measure of who we are, but a personal portrait of our hopes and desires

We live in what is sometimes called the ‘petabyte era’, and this pronouncement has provoked much discussion of the sheer size of data stores being created, as well as their rapid growth. Claims circulate along the lines of: ‘Every day, we create 2.5 quintillion bytes of data – so much that 90 per cent of the data in the world today has been created in the last two years alone.’ This particular statistic comes from IBM’s website under the topic: ‘What is Big Data?’ but similar ones appear regularly in the popular media. The idea has impact. Among other things, it is used to initiate a conversation in which an IBM representative, via a pop-up entreaty, offers big-data services. Merely defining big data, it seems, generates more opportunities for big data.

And the process continues. Ever more urgently in the press, in business and in scholarly journals the question arises of what is unique about big data. Often the definitions are strangely circular. In 2013, a writer for the Columbia Journalism Review described big data as ‘a catchall label that describes the new way of understanding the world through the analysis of vast amounts of data’, a statement that amounts to: big data is big… and it’s made of data. Others talk about its transformational properties. In Wired magazine, the tech evangelist Chris Anderson claimed the ‘end of theory’ had been reached. So much data now exists that it is unnecessary to build a hypothesis to test scientifically. The data can, if properly handled and analysed, ‘speak for themselves’. Many resort to definitions that stress the ‘three Vs’: a data set is ‘big data’ if it qualifies as huge in volume, high in velocity, and diverse in variety. The three Vs occasionally pick up a fourth, veracity, which can be interpreted in a number of ways. At the least, it evokes the striving to capture entire populations, which opens up new frontiers of possibility.

What is often forgotten, or temporarily put aside, in such excited discussions is how much of this newly created stuff is made of and out of personal data, the almost literal mining of subjectivity. In fact, the now common ‘three Vs’ were coined in 2001 by the industry analyst Doug Laney to describe key problems in data management, but they’ve become reinterpreted as the very definition of big data’s nearly infinite sense of applicability and precision.

When introducing the topic of big data in a class I teach at Harvard, I often mention the Charlton Heston movie Soylent Green, set in a sci-fi dystopian future of 2022, in which pollution, overpopulation and assisted suicide are the norm. Rations take the form of the eponymous soylent-green tablets, purportedly made of high-energy plankton, spewed from an assembly line and destined to feed the have-nots. Heston’s investigation inevitably reveals the foodstuff’s true ingredients, and such is the ubiquity of the film’s famous tagline marking his discovery that I don’t think spoiler alert applies: Soylent green is people!

Likewise, I like to argue, if in a different register: ‘Big data is people.’

Most definitions of big data don’t take account of its inherent humanness, nor do they grapple meaningfully with its implications for the relationship between technology and changing ways of defining ourselves. What makes new collections of data different, and therefore significant, is their quality of being generated continuously from people’s mundane, scarcely thought-through, seemingly tiny actions such as Tweets, Facebook likes, Twitches, Google searches, online comments, one-click purchases, even viewing-but-skipping-over a photograph in your feed – along with the intimacy of these actions. They are ‘faint images of me’ (to borrow a phrase from William Gibson’s description of massed data traces), lending ghostly new life to the fruits of algorithmic processing.

Examples of the production sites of such data, as the geographer Rob Kitchin recently cataloged them, include the recording of retail purchases; digital devices that save and communicate the history of their own use (such as mobile phones); the logging of transactions and interactions across digital networks (eg email or online banking); clickstream data that record navigation through a website or app; measurements from sensors embedded into objects or environments; the scanning of machine-readable objects such as travel passes or barcodes; ‘automotive telematics’ produced by drivers; and social-media postings. These sources are producing massive, dynamic flows of diverse, fine-grained, relational data.

In 2012, Wal-Mart was generating 2.5 petabytes of data relating to more than 1 million customer transactions every hour. The same year, Facebook reported that it was processing 2.5 billion pieces of content (links, comments), 2.7 billion likes, and 300 million photo uploads per day. Meanwhile, opportunities for granular data-gathering keep evolving. This February, Facebook rolled out a diversified array of six emoji-like buttons to add range and affective specificity to the responsive clicks possible on the site. Another new feature adds more than 50 additional customised gender descriptors to choose from on Facebook, rather than the binary ‘male’ or ‘female’.

Continuously assembled trails of data derived from all those inputs are quickly being put to use. Data streams can feed maps that tell you not just where you are but also where you want to go; they can, as well, fuel preemptive police work, that is, programs that focus investigations based on patterns discerned in data before a subject has committed a crime. Big data is people, then, in two senses. It is made up of our clickstreams and navigational choices; and it in turn makes up many socially significant policies and even self-definitions, allegiances, relationships, choices, categories.

Some cultural critics call what is emerging a ‘new mind control’ capable of flipping major elections. Others describe a form of rapacious human engineering. Shoshana Zuboff of Harvard Business School argues that the harnessing of behavioural data is having massively disruptive results on freedom, privacy, moral reasoning and autonomy, results that will be playing out for decades to come. In her view, it is nothing less than a virulent new form of capitalism.

big data is too often regarded as a raw force of nature that simply must be harnessed

The momentum of big-data definitions tends to reinforce the impression that big data is devoid of subjectivity, or of any human point of view at all. A set of social-science scholars working in the field of technology studies recently urged researchers to turn from ‘data-centred’ to ‘people-centred’ methods, arguing that too much focus on a data-driven approach neglects the human being who is at the core of sociological studies. This reminder, however useful, neglects the central fact that data traces are made up of people.

Contrary to the novelty with which big data is frequently presented, important parts of this information-gathering process are not quite new – not at all new, in fact. Platforms such as social media are of recent design, but the goal of automated access, the concept of human-as-data, and the fantasy of total information long pre-exist the recent developments. This realisation punctures claims that we are grossly transformed as human beings by big data. The circulation of pervasive inaccuracies about big data is a problem because it has a quelling effect. Misconceptions about big data, tautological repetition, and confusion about its very meaning stifle needed conversations about data privacy and data use.

Even as we pay lip service to diminishing domains of privacy and increasing incursions into this beleaguered space (legal incursions, illegal ones, and the varieties in between), and even as we are reminded by whistleblowers that there is abundant cause for concern, we resist connecting the public-sphere discourse with our own circulating intimacies. Likewise, a feeling that big data is inhuman reinforces the sense that it cannot be modified or regulated; it is too often regarded as a raw force of nature that simply must be harnessed. These beliefs foster intrusions of government and private capital forces that people would probably resist much more strenuously if they clearly understood what is happening. The situation boils down, really, to this: to unwittingly accept big data’s hype is to be passive in the face of big data’s mantle of inevitability. Awareness is the only hope.

For all their futuristic trappings, big data and data-driven science resonate strongly with the history of social-scientific techniques, which during the course of the 20th century reached ever more exactingly into the realm of the subjective, the self, the intimate and the personal. As the social sciences differentiated themselves (sociology from anthropology from social psychology from economics, each in its own department, each with its own areas of interest and special tools), experts battened down authority and built firewalls against enthusiast amateurs, quasi-professionals, and interloping women. Mainstream, professionalising social science abounded in techniques for data-extraction, setting scenes in which subjects would be inclined and accustomed to share their memories, their lives, the seemingly banal details of their first steps or marital first nights.

In Muncie, Indiana, the vast ‘Middletown’ study conducted by the sociologists Robert and Helen Lynd between 1924 and 1926 employed a new grab-bag method (adapted in part from anthropology, in part from sociology) that combined information from interviews, participant-observation, newspaper research, questionnaires and other sources. As the historian Sarah E Igo wrote in The Averaged American (2007): ‘No fact or observation seemed too trivial to include in their purview, from the contents of seventh-grade school curricula to that of popular movies, from the number of hours spent on household washing to the size of Middletowners’ backyards.’

The mining of intimacy has a largely untold history. From early 20th-century observational networks to social surveys and polling efforts, to later-century focus groups, techniques evolved to become ever more targeted. The once out of bounds came in bounds in a seemingly relentless process. The ephemeral was materialised, the fleeting anchored. No subject, no state of subjectivity, was to be ignored. As the psychologist James Sully wrote in 1881: ‘The tiny occupant of the cradle has had to bear the piercing glance of the scientific eye.’ Likewise, by mid-century, everything from hallucinations to idle memories of the most pedestrian variety were targeted as data – with, in some cases, experimental data banks built to hold them.

In 1947, the psychologist Roger Barker created the ‘Midwest Psychological Field Station’, a social-science research outpost in the small town of Oskaloosa, Kansas, through which the town itself emerged as a kind of de facto laboratory. Revolutionising observation opportunities, Barker and his colleagues pioneered the regular capture of data concerning ‘everyday life’ – the unremarkable yet vexingly hard-to-capture details of boy scouts at play in sandlots, schoolyards and other spaces throughout the town. What appears as trivial detail – seven-year-old Raymond at 7:01 am on Tuesday, 26 April 1949, picks up a sock and begins pulling it on his left foot, slow to wake up and groggy, while his mother jokes: ‘Can’t you get your peepers open?’ – pooled with more such data, mounted and massed together, makes a unique resource for sociological study to access the ‘ordinary’ cadence of life during a now-bygone time in a much-changed place. The unremarkable, researchers sensed, would inexorably cease to be so.

Meanwhile, other techniques emerged in research environments designed to further intimate revelations. As the researchers Terry Bristol and Edward Fern have shown, focus-group participants beginning in the late 1950s entered a situation in which they experienced a unique mix of ‘anonymity and arousal’ that facilitated ‘expression of shared experiences’. These developments formed part of an American science for objectifying the realm of the subjective in the modern social sciences. A Midwestern flair ran through several of these projects, winding their way from polling in Indiana to child study in Kansas to the beauty parlours and kitchens of Middletown.

Another area of growing focus during the golden age of behavioural techniques was the use of anthropological subjects to pursue experiments in total access. Scientists looked at these relationships as opportunities to publish and penetrate new domains; research subjects from groups around the globe such as the Cree, the Navajo and Bikini Islanders pursued a range of goals including payment, self-knowledge, participation, feedback and the chance to make one’s voice heard in a not-yet-entirely-imagined scientific record.

By many calculations, a Hopi Indian man named Don Talayesva counts as the most-intensively documented such subject in history, in a life stretching from 1890 to 1976. Talayesva participated in 350 hours of formal interviews with the anthropologist Leo Simmons alone between 1938 and 1940, during which he used his life experiences as a Hopi to fill the taxonomic pigeonholes for ‘Hopi’ within an encyclopedic knowledge bank, the Human Relations Area File, hosted at Yale. There were also 8,000 diary pages Talayesva contributed to ethnographers; 341 dreams written down in wire-bound notebooks; a set of wide-ranging interviews; a full Rorschach protocol and other projective tests; and, as the result of all this, a thriving correspondence with the French surrealist André Breton.

anthropological research probed into the remote psyche, which was treated as a kind of territory to be mapped. But the mapping also helped change the territory

Talayesva’s usual rate of pay was seven cents per page of diary-writing and 35 cents per hour of interviewing, adding some expense for the Rorschach test, all of which made him a relatively wealthy man by Hopi standards. Whether or not he remains today the most-documented native person in history, he was the fount of an ‘enormous body of data’, wrote the author of a psychosexual re-study of the Talayesva corpus. Likewise, for another eminent anthropologist, he provided ‘a storehouse of substantive data’. The man himself became a kind of data pipeline.

The pioneering sociological studies targeted not only individuals but also large groups. The anthropologist Melford Spiro psychologically tested all inhabitants of an entire island in the Western Pacific (Ifaluk) during the same post-Second World War years as neighbouring atolls in the area (Bikini, among others) were sites of intensive nuclear tests. For his academic research, Spiro data-mined whole populations. For American Indians, this ongoing process constituted what the historian Thomas Biolsi calls ‘internal pacification’. In a study of Sioux history between the 1880s and 1940s, Biolsi shows how investigations of Sioux life delved increasingly into psychological domains. Such research probed further and further into the remote psyche, which was treated as a kind of territory to be mapped. But the mapping also helped change the territory. Not evenly or regularly, but painstakingly, a transformation of the Sioux ‘self’ was underway, as Biolsi describes it, and the process of being measured, counted, quantified and (eventually) tested served to aid and abet the subjective changes taking place. In effect, such research subjects were canaries in coalmines.

Experiments in the range of ways to get at what could be considered internal data or what specialists called ‘subjective materials’ extended from Indian reservations to occupied areas to reformatories, factories and armies. Large punch-card-driven statistical enquiries opened up new possibilities, as in the US Army’s landmark The American Soldier project. Starting on 8 December 1941, the day after Pearl Harbor, and continuing until the last days of the war, the Army’s Research Branch administered more than 200 questionnaires, each usually given to a sample of around 2,500 soldiers, some in overseas theatres of battle and remote outposts.

The result was ‘a mine of data, perhaps unparalleled in magnitude in the history of any single research enterprise in social psychology or sociology,’ in the words of the project’s director, Samuel Stouffer. The American Soldier project provided unique access to the inner states of soldiers, offering, as a resulting publication put it, an unbiased look at ‘what the soldier thinks’.

Early audiences for the Lumière brothers’ films, especially crowds viewing footage of the oncoming train that seemed about to penetrate their cinema screen, ran out of the theatre in panic because they had not yet become trained in the illusionary calculus involved in making the experience of watching films enjoyable – at least according to the myths surrounding Arrival of a Train at La Ciotat Station. Made in 1895, with its first public showing in 1896, the 50-second film comprised one continuous shot of an everyday occurrence (a train steaming into a station), yet the camera was positioned on the platform to produce the feeling of a locomotive bearing down on the seated viewer. These ‘naive’ audiences confused signals that indicated one scenario (danger) with another (watching a film about a dangerous situation).

One sees replications of this process of getting acclimated to new techniques in the arena of penetrating social-science instruments, and to its modern incarnation in big data. Early on, citizens seemed to have little resistance to being asked questions by phone pollsters, whereas today as few as 3 per cent will answer questions by phone if they even have a landline. Technology and resistance arose hand in hand.

As the ‘man on the street’ interview debuted in the 1950s and ’60s, members of the public initially watched in bemusement or alarm as strangers posed random questions accompanied by recording devices. A wonderful depiction of this process appears in the classic 1961 cinema-vérité documentary Chronique d’un Été by the anthropologist Jean Rouch and the sociologist Edgar Morin, in which work-worn Parisians exiting the Métro encounter two snappily dressed young women pointing microphones and pressing into their personal space, asking: ‘Monsieur, are you happy?’ The query occasions a range of responses from blank to flirtatious to heart-rending. There is as yet, however, no sense of this as a normal activity, as one can see from the standardised ease with which college students or city commuters answer pointed questions today.

subjects turned researchers’ techniques to their own purposes, asking snarky questions, fomenting rebellion or teasing sociologists

By the second half of the 20th century, citizens (particularly urban dwellers) became increasingly accustomed to the possibility that intrusive questions might be asked at any time and answers expected; evasions also became normalised. The famous Kinsey Report research, built on thousands of interviews, stimulated a wave of prank social surveyors asking women intimate questions about their sex lives. Pretending to be working for the Kinsey report, these caddish pretenders often received fulsome answers until the surprisingly trusting public was warned of predatory practices. At other times, prospective participants queued in eagerness to take part in Kinsey interviews on sexual behaviour, many reporting exhilarating effects that came from feeling oneself to be ‘an infinitesimal cog in one of the greatest fact-finding projects ever undertaken… the first great mass of facts, and figures drawn from a cross-section of all social and educational groups, from which charts, curves and finally conclusions may be drawn,’ as one interviewee reported.

Also around the mid-20th century, a Harvard Business School team under the industrial psychologist Elton Mayo pioneered the gathering of intimate interviews with employees of the Hawthorne Works factory in Cicero, Illinois, carrying out some 20,000 interviews. They aimed to capture what another eminent social scientist famously called the ‘elusive phenomena’. Their answers remain on file at Harvard’s Baker Library, a curious archive of the mundane details of the lives of factory girls circa 1938 or 1941. Jennie, for example, provided the interviewer with details of her evolving hairstyle, hoped-for Christmas gifts, and proclivity to wear her stockings rolled when working on hot days. Assembly-line girls joked about going out drinking the night before and slowing production during the day. As with anthropologists’ American Indian subjects who often spoke back via purportedly neutral measuring instruments (in one case in 1885, Sioux respondents answering a census survey supplied names such as ‘Shit Head’ and other obscenities), the Hawthorne subjects turned researchers’ techniques to their own purposes, at times asking snarky questions, fomenting rebellion, or teasing visiting sociologists.

One day, perhaps not long from now, people will look back at our current decade amazed at the ease and naiveté with which we, enchanted users of new tech, failed to see the value of our own behavioural data resources, and therefore gave them away for little more than ease of use, entertainment value and dubious accretions of status. That is one possibility. On the other hand, the more we can see the process at work, the less the average user falls sway to the hype of ‘never before’. It becomes possible to disentangle what is actually new about data-gathering capabilities (arguably: scale and granularity) from those tendencies that existed before, sometimes long in the past.

A recent White House report on ‘big data’ concluded: ‘The technological trajectory, however, is clear: more and more data will be generated about individuals and will persist under the control of others.’ When trying to understand the ramifications of this big-data trajectory, I argue, it is necessary again to bear in mind that the data is not only generated about individuals but also made out of individuals. It is human data.

In parallel with researchers’ increasingly aggressive collection of personal data, modern research subjects became trained how to participate, how to answer, how to obligingly offer up the precincts of the self to scrutiny – our own and others’. This training has prepared us for a new level of intrusiveness. We are all now primed to give ourselves up to big data.

To look at the history of the quest to scoop up the totality of human behaviour in a scientific net is to illuminate the present obsessions. In the end, we see that the attempt to capture all the parts of human experience (mostly by boiling it down to its everyday-ness) reveals many elements that are familiar, but also some that are distinctly and wildly different. Big data is not a project suddenly spawned by our just-now-invented digital technologies, although it is transformed by them. Instead, we can see that it is a project at the driving core of all of modern life. In many ways, it crowns long-held ambitions to build a transparent machinic self, one capable of optimisation as well as automation.

We need to see the human in the data machine

The behavioural sciences in the 20th century, particularly as practiced by Americans spanning the globe, engaged in an ambitious push to capture ever-more-intimate parts of human experience and to turn them into materials amenable to manipulation by clever machines. This was a prelude to the Rubicon now known as big data. These historical projects, sometimes more and sometimes less closely aligned with government and military sources of support, ran on complex hodgepodge combinations of old and new technology, and paved the way for our own moment in which corporate-cum-research entities feed government data mills rather than the other way around.

This is why the erstwhile goal to gather large amounts of what specialists called ‘human materials’ resonates so strongly today. It speaks to the tension between humans and materials, and the desire to turn one into the other. What the Swiss biological historian Bruno Strasser calls the ‘supposedly unprecedented data-driven sciences’ are not so unprecedented. For that reason, it is necessary to understand what came before in order to grasp what is actually new.

Preceding examples of innovative data collection already targeted inner provinces, and already engaged in subjective data-mining. They were unable to do so on anything resembling the scale today possible by use of digitally derived data streams. Nonetheless, the old imperative to mine inner worlds finds a place at the heart of today’s practices. By being arrayed in new tech, and by being incorporated in new ways into our human experiences, it is transformed. As are we. But if we really want to understand that transformation and to speak up about it, if we want to see what is truly new rather than what is bumptiously paraded as new, we will need to be anchored in the historical particulars. We need to see the human in the data machine.

Friday, 7 April 2023

By brute force

 In their blind variations of senseless combinations, artists and Bitcoin miners both reach for a kind of grace

The first time I went to visit a pair of former artillery sheds on the outskirts of Marfa, Texas, the sun was high in the sky and I knew very little about what I was about to see. The second time, the sun was setting and I knew a bit more, and everything was completely different.

The contents of the sheds are officially known as 100 Untitled Works in Mill Aluminum (1982-1986) by Donald Judd. There are about 60 works in one shed and about 40 in the other. Each one is a box measuring 41 by 51 by 72 inches, and no two boxes are exactly alike. (Bear with me here: I’m about to make one of the most captivating pieces of art I’ve ever seen sound as dry as a tax return.) The boxes have open sides or half-open sides; their sides extend all the way to the corners or are offset by four inches; they are empty or contain internal dividers. The dividers are vertical or horizontal or lateral; they are perpendicular or diagonal; they are single or double; they divide the whole box or half the box; they extend all the way to the corners or are offset by four inches. That’s it; no other variations are allowed. But just as there are, we are told, 221,184 ways to order a Whopper, this still gives an enormous number of possible boxes.
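To get a feel for how a small menu of options multiplies into ‘an enormous number of possible boxes’, here is a toy enumeration in Python. It is emphatically not Judd’s actual rule set (his constraints on how options combine are richer than this, and the option names are my own simplifications); it only illustrates the combinatorial mechanism.

```python
from itertools import product

# A deliberately simplified slice of the option space described above.
# These category names are illustrative placeholders, not Judd's terms.
side_style    = ['open', 'half-open']
side_reach    = ['to the corners', 'offset four inches']
interior      = ['empty', 'divided']
divider_axis  = ['vertical', 'horizontal', 'lateral']
divider_angle = ['perpendicular', 'diagonal']

configs = []
for sides, reach, inside in product(side_style, side_reach, interior):
    if inside == 'empty':
        # An empty box has no divider sub-options.
        configs.append((sides, reach, 'empty'))
    else:
        # A divided box multiplies further by axis and angle.
        for axis, angle in product(divider_axis, divider_angle):
            configs.append((sides, reach, f'{axis} {angle} divider'))

print(len(configs))  # 28 in this toy model
```

Even this crude model yields 28 distinct boxes from a handful of binary and ternary choices; add the remaining variations the essay lists (single or double dividers, whole or half the box, and so on) and the space balloons far past 100.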

When I first saw them, I assumed that Judd must have begun by generating every single viable permutation and then picked 100 to put on show after the fact. Perhaps, I thought, he chose at random, or had many more than 100 boxes fabricated, editing them down once he’d seen which ones looked best. That’s how I would have done it. It would be far too tedious to design the boxes one by one, because you’d have to remember all the configurations you’d already used in order to avoid repeating yourself: even if you employed some sort of special notation, the task would be maddening enough by the 50th box, let alone the 90th.

But another reason for my assumption might have been the time of day. The even and indirect light of a Texas morning through the sheds’ tall windows gave the boxes a quality that you wouldn’t call inert, but that you could very well call implacable, chilly, unwavering. For all their beauty, it was easy to believe that they had been spat out by some sort of algorithm.

When I went back late the following afternoon, there was nothing cold about them any more. The sunset was doing extraordinary things. Some boxes filled up with bronze, some filled up with shadow. Some turned into mirrors, lenses, fishtanks. For almost an hour, wandering among them, I had the sense of qualities such as solidity, opacity and reflectivity moving back and forth between them like assets between bank accounts. And because I’d spent a few minutes in the gift shop flipping through Marianne Stockebrand’s hefty book Chinati: The Vision of Donald Judd (2010), I now knew that Judd did indeed design the boxes one by one, or at least in smallish groups, over a period of several years. There was never a master list.

To the question of how he could grind out 100 different configurations without inadvertently repeating himself or going out of his mind, there are two answers. The first is that, on the evidence of his sketches, he actually did repeat himself a few times, but managed to catch the duplicates before the blueprints were sent off to Lippincott, Inc., a specialist art fabrication company in Connecticut. The second is that Judd was a professional Minimalist (even if he would not have used that term). Variations and iterations were his whole method. Keeping these boxes straight wouldn’t have been as trying for him as it would be for you or me. He loved every one of his children.

My second visit to 100 Untitled Works in Mill Aluminum gave me that rarest of experiences, a genuine epiphany with a work of art. The boxes had bloomed and deepened before my eyes. I went from complete indifference about Judd to real fascination. So why was I so disappointed by what I’d learned?

There is one field in which people routinely do use the approach that Judd didn’t, and that’s cryptography. A ‘brute-force attack’ means guessing a password or encryption key by methodically trying out every possible combination of letters and numbers until you find one that works. Quite a few of Judd’s contemporaries in the art world were experimenting with brute-force attacks of various kinds, but it is probably the Italian artist Alighiero e Boetti whose work best endures prose summary.

For his project Untitled (Victoria Boogie Woogie) (1972), Boetti chose seven different postage stamps that added up to 200 lira, the cost of posting a letter in mainland Italy at the time. He bought 5,040 of each of these stamps and stuck them in all 5,040 possible combinations to 5,040 envelopes addressed to his own house in Turin. Over the next two years, he mailed them from various places, finally exhibiting them in 42 framed groups of 120 postmarked envelopes each. (If my rough calculations are correct, the stamps alone would have cost him about £5,000 in today’s money.)
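Boetti's arithmetic is easy to verify: seven distinct stamps can be arranged in 7! = 5,040 left-to-right orders. A minimal sketch in Python (the labels A–G are my stand-ins, since the essay doesn't enumerate the actual denominations):

```python
from itertools import permutations

# Seven stamps, labelled A-G as stand-ins for Boetti's actual
# denominations, which are not enumerated above.
stamps = "ABCDEFG"

# One entry per possible left-to-right ordering on an envelope.
orderings = list(permutations(stamps))

print(len(orderings))  # 7! = 5040, one per envelope
```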

The first difference between Boetti’s Untitled and Judd’s 100 Untitled Works is that Boetti let his permutations keep running until, like an overstimulated toddler, they had thoroughly exhausted themselves. Having concocted the rules of the exercise, he made no further choices: there was no preemption and no selection. The second difference is that all of Boetti’s envelopes are equal. No ordering of the stamps is any more attractive than any other. From Judd’s 100, on the other hand, certain stars are born. Some boxes look the same at sunset as they do at noon, but others foster optical illusions of various kinds. It could be that Judd was able to predict in advance which configurations would have this property. But it seems more likely to me that the sheer multiplicity of the boxes allowed him to stumble on it by chance.


There are at least a few brute-force attacks in other forms of art. We find them, for instance, in the concrete poetry of Brion Gysin, a close friend of William Burroughs. Gysin’s 120-line poem ‘I AM THAT I AM’ (1960) simply cycles through the 120 different orders into which those five words can be scrambled. With the help of an English mathematician called Ian Sommerville, Gysin used an early computer program to generate his permutations. When the poem was broadcast on BBC radio in 1961, it supposedly got the second-lowest audience ratings of any programme to date. What was the point of it? Partly it’s the hypnotic effect on the eye or the ear, but its main appeal is surely that among the gibberish – ‘I I THAT AM AM’, ‘AM I AM THAT I’ – there are a few quasi-meaningful pronouncements – ‘I THAT AM I AM’, ‘AM I I THAT AM’– produced without any human intervention whatsoever (just as a few of Judd’s boxes happen to do clever things with the light, but most don’t).

The single most enduring monument to the brute-force attack in literature is Samuel Beckett’s novel Watt (1953). Here, to give an impression of the technique, is the first time it comes up:

Now these voices, sometimes they sang only, and sometimes they cried only, and sometimes they stated only, and sometimes they murmured only, and sometimes they sang and cried, and sometimes they sang and stated, and sometimes they sang and murmured, and sometimes they cried and stated, and sometimes they cried and murmured, and sometimes they stated and murmured, and sometimes they sang and cried and stated, and sometimes they sang and cried and murmured and sometimes they cried and stated and murmured, and sometimes they sang and cried and stated and murmured, all together, at the same time, as now, to mention only these four kinds of voices, for there were others.

The novel contains 36 more passages like that, many of them longer, a few of them up to three or four pages.

Beckett wrote Watt when he was hiding from the Gestapo in the south of France. Obviously, unlike Gysin, he did not have a computer to help him. ‘One imagines that in some situations in the years 1942-44 it took longer to become fatigued or disgusted with such things,’ suggests the American critic Dr John J Mood in a 1971 essay on the novel. ‘One also imagines that all sorts of people had all sorts of little semi-compulsive games with which to occupy themselves at that time. Indeed, in some situations at that time such compulsions were probably necessary if one were to survive with any sanity.’

In other words, because Beckett was bored for three years, readers would be bored for generations to come. That’s how a harsh critic might put it – not me! I quite like Beckett – but there cannot be many who so much as flick their eyes across every line of his three-page combinatorial extravaganzas; even people who read the Bible cover to cover in prison don’t trudge through all the begetting. The sheer weight of them is enough to make the point.

Now, did you notice that something was amiss with the passage from Watt quoted above? Of course you did. This particular brute-force attack should contain 15 combinations but only has 14, and the one it so brazenly omits is ‘sang and stated and murmured’. Dr Mood takes on the heroic task of analysing every single one of these 37 passages and discovers a total of 22 such errors. He argues that all of those errors were deliberate, since the novel is so much about incompetence and faultiness, and for the purposes of this essay I’m going to assume that Dr Mood is correct.
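Dr Mood's tally can be checked mechanically: four voices yield 2⁴ − 1 = 15 non-empty groupings. A quick Python sketch (the variable names are mine, not Beckett's or Mood's):

```python
from itertools import combinations

# Beckett's four verbs, in the order the quoted passage introduces them.
voices = ["sang", "cried", "stated", "murmured"]

# Every non-empty grouping, smallest groups first, as in Watt.
groups = [c for r in range(1, 5) for c in combinations(voices, r)]

print(len(groups))  # 15: the passage quoted above lists only 14
print(("sang", "stated", "murmured") in groups)  # True: the omitted triple
```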

So in one respect Beckett is closer to Judd than he is to Boetti or Gysin, because he meddles with his outcomes. But in another respect he’s closer to Boetti than he is to Gysin or Judd, because his brute-force attacks aren’t intended to generate spectacle or meaning by accident – none of the dozens of permutations in a three-page litany stands out from any other.

In his own essay on Watt in 1992, the French philosopher Gilles Deleuze wrote:

The combinatorial is the art or science of exhausting the possible, through inclusive disjunctions. But only the exhausted can exhaust the possible, because he has renounced all need, preference, goal or signification. Only the exhausted is sufficiently disinterested, sufficiently scrupulous. Indeed, he is obliged to replace projects with tables and programs denuded of sense.

Disinterested. Scrupulous. Deleuze seems to be imagining the creator as a sort of ideal bureaucrat or scrivener. This is just right for Boetti, who voluntarily turned himself into a mailroom clerk. In his essay ‘Paragraphs on Conceptual Art’ (1967), the American artist Sol LeWitt describes a similar demotion:

When an artist uses a conceptual form of art, it means that all of the planning and decisions are made beforehand and the execution is a perfunctory affair. The idea becomes a machine that makes the art… [T]he fewer decisions made in the course of completing the work, the better. This eliminates the arbitrary, the capricious, and the subjective as much as possible.

If Deleuze was right about the aims of Watt, then Beckett, shivering in his farmhouse, beat the New York conceptualists to this idea by 20 years – although it’s also possible that Deleuze, writing in the 1990s, was looking at the ’40s through the lens of the ’60s. Either way, what we seem to be talking about here is liberation from ego. And indeed, the connection with Zen Buddhism is explicit in the music of John Cage, who knew LeWitt and had friends in common with Beckett. Cage, after all, attributed his aleatoric compositional methods to the revelations he’d had at the feet of the Zen teacher D T Suzuki.

We might even extend this circle to include David Foster Wallace. The IRS accountants in his posthumous novel The Pale King (2011) seem very much like the contented, almost angelic bureaucrats that Deleuze evoked and Boetti imitated. ‘Bliss,’ Wallace wrote, ‘a second-by-second joy and gratitude at the gift of being alive, conscious – lies on the other side of crushing, crushing boredom. Pay close attention to the most tedious thing you can find (Tax Returns, Televised Golf) and, in waves, a boredom like you’ve never known will wash over you and just about kill you. Ride these out, and it’s like stepping from black and white into color. Like water after days in the desert. Instant bliss in every atom.’

Why was it so disappointing to learn that Judd’s method did not, in fact, ‘eliminate the arbitrary, the capricious, and the subjective’? We now have a draft of an answer. The brute-force attack reveals itself as a kind of grace, and any second-guessing becomes a fall from it. If you just let the permutations permute, they envelop you in a wholeness, a blankness, an infinite unselfconsciousness that shatters the moment you try to impose your contingent preferences. Perhaps this was why Beckett couldn’t allow those passages to roll on faultlessly. As Dr Mood observes, everything in his work has cracks.

Let’s go back to the original meaning of the brute-force attack. For some modes of encryption, brute force is no longer effective. Bank transactions, for instance, now tend to use either 128- or 256-bit encryption, which means the key can be expressed as that number of 1s and 0s. According to one estimate, if you tried to use an existing supercomputer to guess a 128-bit key by trial and error, it would take about 150 trillion years. And even if you invented the fastest conceivable supercomputer, the laws of thermodynamics still dictate that, in the words of the American security analyst Bruce Schneier, ‘brute-force attacks against 256-bit keys will be infeasible until computers are built from something other than matter and occupy something other than space’.
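The scale of that estimate is easy to sanity-check. Assuming, hypothetically, a machine that tests 10^18 keys per second (my assumption, far faster than any real supercomputer of the period), exhausting a 128-bit keyspace still takes trillions of years:

```python
# Sanity check on the quoted estimate. The rate below is a hypothetical
# machine testing 10**18 keys per second -- an assumption of mine.
keys = 2 ** 128
guesses_per_second = 10 ** 18
seconds_per_year = 60 * 60 * 24 * 365

years = keys // (guesses_per_second * seconds_per_year)
print(years)  # on the order of 10**13 years, i.e. trillions of years
```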

The same does not apply to the passwords that human beings come up with themselves. If your password isn’t unusually long, any laptop with the right off-the-shelf upgrades could crack it in days or hours. In practice, though, attacks on passwords are not naive brute-force attacks. Because of security breaches at various websites, today’s hackers have access to lists of millions of passwords that real people have used in the past. They combine these with text from dictionaries, Wikipedia and the Bible to give their software a head start, on the basis that ‘123456’, ‘qwerty’ and ‘jesuswept’ are a lot more common than ‘$o61e^60hY’. Like Judd, they decide in advance which permutations are likely to be most effective.

There is, however, one field in which brute-force attacks are still the state of the art, and that’s cryptocurrencies. Bitcoin has fallen a long way from its December peak, both in terms of its value and its reputation. But though you might be forgiven for shelving it next to Swatch’s Internet Time in your mental archive of online fads, its volume of transactions remains impressive: Bitcoin moves about $35 million in payments every day. Meanwhile, new cryptocurrencies are being released into the wild on a regular basis, from the robust Mastercoin to the preposterous Coinye to the preposterous and surprisingly robust Dogecoin. Most of these are premised on a similar ‘mining’ idea. Since not a single one of the zillion trend pieces you’ve already read about Bitcoin will have contained a satisfactory account of what this is, there now follows an attempt at an explanation of cryptocurrency mining by someone who doesn’t really understand it either.


When new ‘coins’ are introduced into circulation, they are allocated to whoever has been helping to maintain the ‘block chain’, the public ledger of every transaction ever made with the currency. Maintaining the block chain involves generating new ‘hashes’, or encrypted records of the transactions. This ought to be easy enough, except that in this case the process is specifically designed to be laborious. It takes a tremendous amount of donkeywork to generate a single valid hash, which is what prevents new coins from being minted too quickly. At the time of writing, the total computing power being devoted to the mining of bitcoins alone is about 350 exaFLOPS (for floating-point operations per second) of, to quote Deleuze again, ‘tables and programs denuded of sense’, or 1,400 times the power of the top 500 supercomputers in the world combined. Mining of this kind is an unadulterated brute-force attack: it involves trying number after number after number after number after number until you find a hash that works.
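That donkeywork can be sketched in a few lines. This is a toy, not Bitcoin's actual protocol (which double-hashes a binary block header and compares it against a numeric target), but it shows why mining is a pure brute-force attack: there is nothing to do but count upwards until a hash happens to fit:

```python
import hashlib

def mine(data: str, difficulty: int) -> int:
    """Search for a nonce whose SHA-256 hash of data+nonce starts with
    `difficulty` hex zeros -- a toy stand-in for Bitcoin's target check."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

# Number after number after number until one works.
nonce = mine("block of transactions", 4)
print(nonce, hashlib.sha256(f"block of transactions{nonce}".encode()).hexdigest()[:8])
```

Raising `difficulty` by one multiplies the expected number of attempts by sixteen, which is how the real network keeps the minting rate steady no matter how much hardware is thrown at it.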

Bitcoins are still valuable enough that anyone who could discreetly find a shortcut around this process would get very rich. As a result, you can be certain that there are legions of greedy mathematicians working on the problem as we speak. But this might be our era’s equivalent of the quest for a perpetual motion machine. According to the disinterested researchers who have made their findings public, the notion of circumventing the brute-force attack to mine cryptocurrency is nothing but a fantasy. It will never fall from grace.

And it seems to me that the Bitcoin network meets LeWitt’s description of conceptual art. If nothing else, it conforms to his edict that ‘the basic unit’ of the work ‘be deliberately uninteresting so that it may more easily become an intrinsic part of the entire work’. Boetti’s absurdist bureaucrat has been downsized: a computer is capable of accomplishing nothing whatsoever in far less time. ‘Art is noble through being useless,’ wrote the French-born historian of ideas Jacques Barzun, and in the perversity of these easy calculations made deliberately hard we recognise a festival of uselessness. One might object that cryptocurrency mining isn’t useless because, if all goes well, it makes money for the person who does it. But then, so does art.

If you’re a brute-force attacker in the software field, you’re not hoping to stumble across anything interesting on your way to the solution. There will be no ‘I THAT AM I AM’ glittering in the dust. But that’s only because you don’t know which permutations to look for. Any sufficiently large corpus of random data will contain reams of occult knowledge, just like Jorge Luis Borges’s Library of Babel. A Bitcoin mining program will generate the exact time and GPS co-ordinates of your death. A password cracking program will generate a password of the form ‘iloveyou[first name][last name]’, where [first name] [last name] is the husband or wife you haven’t met yet. These things will happen many times every second.

Before his story ‘The Library of Babel’ (1941), Borges wrote an essay on the same subject called ‘The Total Library’ (1939), in which he explicitly connects the fecundity of randomness to the creation of the universe. He quotes Cicero’s On the Nature of the Gods:

I do not marvel that there should be anyone who can persuade himself that certain solid and individual bodies are pulled along by the force of gravity, and that the fortuitous collision of those particles produces this beautiful world that we see. He who considers this possible will also be able to believe that if innumerable characters of gold, each representing one of the 21 letters of the alphabet, were thrown together onto the ground, they might produce the Annals of Ennius. I doubt whether chance could possibly create even a single verse to read.

Cicero was wrong, of course. If you asked a computer to throw the letters of the alphabet ‘on the ground’ enough times, it would indeed produce the Annals of Ennius, although it would take even longer than cracking a 256-bit key (and anyway, we wouldn’t know when it had succeeded, because the Annals of Ennius have not survived).

Moreover, most of us now believe that ‘this beautiful world that we see’ really was produced by the fortuitous collision of particles. First of all, matter from the solar nebula had to clump together to create a habitable world. Second, a dribble of that matter had to organise itself into self-replicating molecules. Third, these self-replicating molecules had to mutate into human beings, because we are not books from the Library of Babel, but rather, in the American philosopher Daniel Dennett’s phrase, books from the Library of Mendel. Our current conception of our own origins resembles one of those puzzles where you have to keep shaking and tilting until all the ball bearings roll into the little holes, except that inside every one of those ball bearings is another, smaller puzzle which you have to solve at the same time. The universe took nearly 14 billion years to get all the balls in the holes, and we are very lucky it happened at all. Oh, and on top of all that, Mum and Dad had to meet and perhaps fall in love.

The zen of permutation still holds, but we must also make room for a second reason why the work of Judd, Beckett, Boetti and all their diligent colleagues can be so entrancing. We recognise in the brute-force attack the improbable story of our own birth. The 100 Untitled Works in Mill Aluminum are, therefore, cheats, unless your favoured account of our origins involves a few nudges and whittlings on the Creator’s part. In that case, you might prefer Judd’s method to LeWitt’s total ban on caprice and subjectivity. Either way, the boxes are the product of a small selection of rather mundane ingredients combined in a variety of ways, and from these combinations come wonders that could not possibly have been predicted from the ingredients alone. So we lose ourselves in permutations because we are permutations too. As the sun sets over Marfa, we see our own reflections in the box’s glossy sides and think: ‘I am that – I am.’
