Sir Barnes Neville Wallis, CBE, FRS, RDI, FRAeS, born on September 26, 1887 in Ripley, Derbyshire, is often remembered for his role in the development of the famous "bouncing bomb" during the Second World War. However, his contributions to science, engineering, and aeronautics extend far beyond this iconic invention. A visionary in the truest sense, Wallis left a legacy that includes groundbreaking work in airship design, aircraft development, and advanced weaponry, shaping the course of 20th-century technology.

Terry Bailey explains.

Barnes Neville Wallis.

Early life and education

Wallis's early life provided the foundation for his eventual career in engineering. His father, Charles Wallis, was a doctor, but young Barnes developed an early fascination with mechanical objects, much to his father's frustration. After attending Christ's Hospital school in Sussex, where he displayed a knack for mathematics and science, Wallis began an apprenticeship at Thames Engineering Works. He subsequently transferred his apprenticeship to J. Samuel White's, the shipbuilder based at Cowes on the Isle of Wight, where he originally trained as a marine engineer before taking a degree in engineering via the University of London external program.

 

Contributions to airship design

Wallis's early career saw him make significant contributions to the development of airships. In 1913, he joined Vickers, a company heavily involved in aeronautics, where he began working on lighter-than-air vehicles. He played a pivotal role in the design of the R100, a large British airship intended for long-range passenger travel.

The R100 project was part of a competition with the government-sponsored R101, a craft of a different design, which ultimately ended in disaster when the R101 crashed. While the R101's failure effectively ended the British airship program, the R100 itself was a technical success, in large part due to Wallis's innovative structural design, which utilized a geodesic framework. This design became a hallmark of Wallis's work.

The geodesic framework was notable for its strength and lightweight properties. This design not only enhanced the airship's durability but also reduced its overall weight, making it more fuel-efficient. The R100's successful transatlantic flight to Canada in 1930 was a testament to the efficacy of Wallis's design, even though the airship program was ultimately scrapped after the R101 disaster.

 

Transition to aircraft design

After the decline of airship development, Wallis turned his attention to aircraft design. His expertise in geodesic structures led him to work on the Vickers Wellington bomber, which was used extensively by the Royal Air Force (RAF) during the Second World War. The Wellington's geodesic structure made it incredibly resilient to damage. Unlike conventional aircraft, the Wellington could sustain considerable battle damage yet continue flying due to its ability to retain structural integrity even after losing large sections of the skin or framework.

This durability made it a valuable asset during the war, particularly during the early bombing campaigns. Wallis's work on the Wellington showcased his ability to apply innovative design principles to aircraft, extending the operational capabilities and survivability of warplanes. The Wellington aircraft became one of the most produced British bombers of the war, with more than 11,000 units built, attesting to the practical success of Wallis's engineering philosophy.

 

The Bouncing Bomb and the Dam Busters Raid

Wallis is perhaps most famous for his invention of the bouncing bomb, which was used in the Dam Busters Raid (Operation Chastise) in 1943. This operation targeted key dams in Germany's industrial Ruhr region, aiming to disrupt water supplies and manufacturing processes critical to the Nazi war effort. The bouncing bomb, officially known as "Upkeep," was an ingenious device that skimmed across the surface of the water before striking the dam and sinking to the optimal depth, where it detonated when a hydrostatic pistol fired. In addition to Upkeep, two smaller versions were also developed: Highball and Baseball.

The design of the bomb required not only advanced physics and mathematics but also extensive practical testing. Wallis conducted numerous experiments with scaled-down prototypes to perfect the bomb's trajectory and spin, ensuring it could bypass underwater defenses and inflict maximum damage, before conducting half- and full-scale tests of the bomb. The Dam Busters Raid, though not as strategically decisive as hoped, was a major tactical and propaganda victory that demonstrated the effectiveness of precision engineering in warfare. It also solidified Wallis's reputation as one of Britain's foremost wartime inventors and designers.

 

Beyond the Bouncing Bomb: The Tallboy and Grand Slam

While the bouncing bomb is Wallis's most well-known design, his development of the "Tallboy" and "Grand Slam" bombs was arguably more impactful. These were so-called "earthquake bombs," designed to penetrate deeply into the ground or fortifications before exploding, causing immense structural damage. The Tallboy, weighing 12,000 pounds, was used effectively against hardened targets such as U-boat pens, railway bridges, and even the German battleship Tirpitz, which was sunk by RAF bombers in 1944.

The Grand Slam, a 22,000-pound bomb, was the largest non-nuclear bomb deployed during the war. Its sheer destructive power was unparalleled, and it played a crucial role in the final stages of the conflict, helping to obliterate reinforced German bunkers and infrastructure. Wallis's work on these bombs demonstrated his understanding of the evolving nature of warfare, where the destruction of heavily fortified targets became a priority.

 

Post-War Contributions: Advancements in supersonic flight

After the war, Wallis continued to push the boundaries of engineering, particularly in the field of supersonic flight. He began working on designs for supersonic aircraft, foreseeing the need for faster travel in both military and civilian aviation. His proposed designs included the "Swallow," a supersonic development of his earlier "Wild Goose" concept. Designed in the mid-1950s, the Swallow was a tailless aircraft controlled entirely by wing movement, with no separate control surfaces.

The design was intended to exploit laminar flow and could have been developed for either military or civil applications; both Wild Goose and Swallow were flight-tested as large (30 ft span) flying scale models. However, despite promising wind tunnel and model work, these designs were not adopted, and government funding for Wild Goose and Swallow was cancelled due to defense cuts.

Although Wallis's supersonic aircraft designs were never fully realized during his lifetime, they laid the groundwork for later advancements in high-speed flight. The variable-sweep wing technology he envisioned was later incorporated into aircraft such as the F-111 Aardvark, and his concepts of supersonic flight found echoes in the iconic Concorde supersonic passenger airliner. Wallis's vision of supersonic travel underlined his enduring ability to anticipate technological trends.

 

Marine engineering and submersible craft

Wallis's inventive spirit was not confined to aeronautics. In the post-war years, he became involved in marine engineering, focusing on the development of submersible craft and weaponry. One of his notable projects was an experimental rocket-propelled torpedo codenamed HEYDAY. Powered by compressed air and hydrogen peroxide, it had an unusual streamlined shape designed to maintain laminar flow over much of its length.

Wallis also explored the development of deep-sea submersibles. His work on underwater craft highlighted his interest in new forms of exploration and transportation, aligning with the burgeoning post-war interest in oceanography and underwater research. As part of this work, he proposed large cargo- and passenger-carrying submarines that would have reduced transportation costs drastically; however, nothing came of these designs, which might well have transformed ocean-going transportation.

Owing to his experience in geodesic engineering, Wallis was engaged as a consultant on the Parkes Radio Telescope in Australia. Some of the ideas he suggested are the same as, or closely related to, the final design, including supporting the dish at its center, the geodetic structure of the dish, and the master equatorial control system.

 

Later life and recognition

Throughout his life, Wallis maintained a strong commitment to education and mentorship. He was an advocate for the advancement of engineering as a discipline and frequently gave lectures to students and professionals alike. Wallis became a Fellow of the Royal Society in 1945, was knighted in 1968, and received an Honorary Doctorate from Heriot-Watt University in 1969 in recognition of his outstanding engineering achievements. Additionally, he was awarded the Royal Society's prestigious Rumford Medal in 1971 for his work in aerodynamics.

Even in his later years, Wallis remained active in engineering, particularly in exploring the future potential of space travel. His forward-thinking ideas on rocket propulsion and spacecraft design, though largely theoretical at the time, hinted at the emerging field of space exploration, which would become a global endeavor in the following decades.

Wallis passed away on October 30, 1979, leaving behind a legacy of innovation that continues to inspire engineers and inventors worldwide. His impact on both military and civilian technologies is a testament to his brilliance and his determination to push the boundaries of what he knew to be possible, even when others did not.

 

Legacy

Sir Barnes Neville Wallis, CBE, FRS, RDI, FRAeS, was a true polymath whose influence extended across multiple disciplines. While he is best known for his wartime contributions, particularly the bouncing bomb, his legacy goes far beyond a single invention.

From the geodesic structures of airships and bombers to supersonic aircraft concepts, deep-sea exploration vehicles, and innovative ideas on ocean and space travel, Wallis's career spanned an astonishing range of technological advancements. His ability to marry theoretical physics with practical engineering solutions made him a giant of 20th-century science and technology.

Wallis's story is not just one of wartime ingenuity but of a lifetime spent striving to solve complex problems with creativity and persistence. His contributions continue to resonate today, reminding us that the spirit of innovation is timeless.

 


Candidate Donald Trump thrust immigration issues at the Southern border into the forefront of American politics in the early weeks of the 2016 presidential campaign.   But even then, the issue was not new.

Joseph Bauer, author of Sailing for Grace (Running Wild Press 2024), explains.

A teacher, Mary R. Hyde, and students at the Carlisle Indian Training School.

At least two years before 2016, large numbers of Central American families, nearly all from the Northern Triangle countries of Honduras, Guatemala, and El Salvador, began to arrive in walking caravans at the Texas border. Well before Donald Trump emerged, the U.S. immigration system at the border was overtaxed. Ronald Reagan knew it; George W. Bush (fluent in Spanish) knew it; Barack Obama knew it. All tried to address it, explaining that congressionally authorized resources were simply inadequate to manage the realities permitted by U.S. immigration law, especially the legal right of foreign nationals to seek asylum or refugee status under a federal statute essentially unchanged since 1965.

Efforts for change and improvement all died at the altar of partisan self-interest. Legislators from across the country—not only those with constituents on the border—concluded that any solution would be more problematic to individual political fortunes than continuing the status quo and arguing the positions most favored by the local and regional voters they needed for election.

Stoked by political rhetoric from both sides, the public worried about large numbers of newcomers entering the country. Legitimate worries included concern about the strain on public health, school resources, and transportation infrastructure in local communities. Unfounded worries included concerns about crime. Numerous studies have documented that immigrants, including those entering legally by asylum and even those “undocumented” persons who are in the U.S. without legal status, commit crimes at materially lower rates than American-born citizens.[1] The other objection, that newcomers cause Americans to lose or fail to find jobs, has also been refuted. Employers hiring large numbers of workers almost unanimously want as many immigrants as possible allowed in to fill jobs for which they cannot otherwise find applicants. And increases in the number of immigrants, by adding to the economy and the success of businesses, actually increase the wages and employment opportunities of American-born citizens.[2]

 

2017 and 2018

But in 2017 and 2018, the Trump administration moved, without Congressional authority, to stop the Central American caravans with a new measure: the broadscale involuntary separation of parents from their children at the Texas border, primarily at El Paso. The intent of the new policy was deterrence: to discourage asylum-seeking families from making their journeys. If families began to be separated, finding themselves in different countries, it was thought, the seekers would stop seeking.

The program was initially undisclosed by the administration (on the ground that it was merely a “pilot program”) and drew little public or media attention.  It worked as follows.

A family presented itself to the Border Patrol agents at the crossing (the recommended pathway for entrance) or on American soil near the border, having managed to reach it by other means (not recommended, but still a legal way to seek asylum under U.S. law). Following brief interviews, nearly all parents were either detained temporarily in a government facility without their children or summarily deported and sent to the Mexican border, again without their children. The children, regardless of age, were immediately deemed “unaccompanied minors,” since their parents were no longer present with them, and turned over to the custody of the federal Office of Refugee Resettlement for temporary housing, often in a church-affiliated respite center, and then ultimately placed in either American foster homes or the home of a qualifying relative somewhere in the U.S., if such a person could be identified. If a relative could be found at all, the process was often lengthy.

Data on the actual number of children taken from their parents during the Trump administration are imprecise. But studies by relief organizations such as the U.S. Conference of Catholic Bishops, the Lutheran Immigration and Refugee Service, and the Washington Office on Latin America have documented that at least 2,000 children were removed from parents between February 2017 and the date the policy became public and official in May 2018; another 2,500 were separated in the 50 days thereafter. Some estimates run as high as nearly 6,000 children and 3,000 families. The Trump administration ostensibly stopped the practice on June 20, 2018, in response to public furor and condemnation from all sides of the political spectrum.[3] Anti-immigrant positions might earn votes for some politicians, but taking children from their parents earned votes for nobody. What it did was provoke the abhorrence of the vast majority of (but not all) Americans.

 

Historical context

But was the widescale forced separation of parents and children in 2018 actually new in historical American immigration policy and practice?  Could such a policy have been a reality earlier in the American experiment?  The truth—many would say sad truth—is that it was.  Two prior examples are obvious and known to most Americans.

The first was the long period of legal slavery in the U.S., when millions of African and Caribbean black men and women were forcibly transported to the United States with the approval of the federal and state governments and held here in involuntary servitude.  Those slaves who were able to bring their children with them, or who gave birth to them once here, were routinely sold to new owners, never to see their children or grandchildren again.  There is no denying that slavery in the U.S. was tragically replete with the separation of families.

The second instance was the common practice for decades in the 19th century of removing Native American children from their natural parents and tribes and placing them either in “Indian” boarding schools or the strange Christian homes of white Americans.  (A moving portrait of the practice—and its harmful effects—is depicted in Conrad Richter’s classic novel, The Light in the Forest.)  These involuntary relocations were massive.  Federal and state governments separated as many as 35% of all American indigenous children from their families, according to a 1978 report by the US House of Representatives.[4]

Most of us learned of the above practices in our American educations, if incompletely.  But many may be surprised to learn that family separation in the U.S. occurred at the hands of some state and city governments even into the early 20th century, condoned or overlooked by the federal government.   Prior to 1920, when specific immigration rules were enacted by Congress, state and city governments, motivated by anti-Catholic sentiment, removed an estimated 150,000 to 200,000 children of Irish and Polish immigrants and placed them in Protestant or Anglo-American households, away from their local areas.[5]

We can hope that such interference with the family unit, based on religious hatred, would be unthinkable today.  But it is part of our past.

 

2015 report

In 2015, the American Bar Association's Commission on Immigration published a report entitled Family Immigration Detention: Why the Past Cannot Be Prologue. The report addressed the difficult and sad question of the morality of detaining whole families at the border under the Obama administration. It preceded the family separation policy that the Trump administration implemented three years later, which most Americans believe was even sadder and more immoral.

The authors of the ABA report must be disappointed. The “past” that the report examined—the detention of whole families—did not improve. Instead, it worsened, with a government policy that, at least temporarily, divided the nuclear family unit itself.

 

In this instance, we did not advance from the lessons of our past.  But in history, there is always hope.  Maybe we will learn them at last.

 

 

Joseph Bauer is the author of Sailing for Grace (Running Wild Press 2024), a novel that explores a white widower’s quest to fulfill a promise to his dying wife: to reunite Central American parents with their children separated from them at the Texas Border in 2018.  Mr. Bauer’s previously published novels are The Accidental Patriot (2020), The Patriot’s Angels (2022), and Too True to be Good (2023).  His latest finished manuscript of historical fiction about the lead-up to and conduct of WWII is titled, Arsenal of Secrecy, The FDR Years, A Novel.


[1] See, e.g., Undocumented Immigrant Offending Rate Lower Than U.S.-Born Citizen Rate, University of Wisconsin research study funded by the National Institute of Justice (September 2024). This and many other studies conclude that undocumented “illegal” immigrants commit crimes at about half the rate of American-born citizens, or less, on a per-person basis. This is true across all kinds of crimes, including murders, other violent crimes, and drug trafficking. Admitted asylum seekers and refugees also commit far fewer crimes than their American-born counterparts.

 

[2] Immigration’s Effect on US Wages and Employment, Caiumi, Alessandro and Peri, Giovanni, National Bureau of Economic Research (August 2024).

[3] Many sources, including a U.S. government audit, have reported that family separations at the Texas border continued in significant numbers well after the Trump administration announced a halt to them in June 2018. See, e.g., Long, Colleen; Alonso-Zaldivar, Ricardo. “Watchdog: Thousands More Children May Have Been Separated”. U.S. News & World Report, January 18, 2019.

[4] Sinha, Anita. An American History of Separating Families, American Constitution Society, November 2, 2020.

[5] Americans today almost unanimously believe that our Constitution, by its First Amendment, assures inviolate an individual and collective right to freedom of religion and worship. But that Amendment, until applied to State and local governments much, much later, did not prevent any state from religious discrimination in its own laws. Catholics were so generally despised in Massachusetts in the early days of our nation that Catholic priests were forbidden by state law from living there and subject to imprisonment and even execution if they did.

Thousands of political science books and magazines discuss the idea of democratic transformation. For example: how does a country once under authoritarian rule transform into one governed by individual freedoms and democracy? And what do we truly know about dictatorships? Can a democratic country transform into a dictatorial one, despite the pre-existence of a constitution and elections?

Probably the best-known example of this is Germany, which had a parliament, a multi-party system, laws protecting elections, and laws protecting individual freedoms. At the time, the illiteracy rate was almost zero percent, yet the country transformed from a democracy into an expansionist dictatorship in 1933, after Hitler's rise to power.

Nora Manseur and Kaye Porter explain.

Banknotes awaiting distribution during the 1923 German hyperinflation. Source: Bundesarchiv, Bild 183-R1215-506 / CC-BY-SA 3.0.

Early life of Adolf Hitler

A complex history creates the foundation of a man who was able to order the deaths, directly or indirectly, of over 60 million people. Hitler was a frustrated painter and a vegetarian. His forces occupied 11 countries, some partially and others completely, among them Poland, France, Holland, Denmark, Norway, Luxembourg, Yugoslavia, and Greece. Whether we like it or not, the man who failed his entrance examinations and was passed over for positions of leadership still captured the psyche of nations. Hitler changed the course of history.

The leader of Nazi Germany was born in Austria in 1889. He had reason to hate and fear his father, who was violent towards his mother and used to beat them both severely. In 1907 he attempted to join the Academy of Fine Arts in Vienna, but was rejected twice after failing the entrance exam. After the death of his mother, Klara Pölzl, at the end of the same year, he moved to Vienna, one of the most prominent capitals in Europe. At the time, Vienna's mayor was a known anti-Semite called Karl Lueger. For a young man who had experienced much violence and rejection, settling in Vienna contributed to shaping his ideas, not least because of Lueger's prominence and his hostility towards the Jews.

World War I broke out in 1914, and at age 25 Hitler volunteered to join the army, where he was twice decorated for bravery during the war. Despite that, he was not promoted. According to his commanders at the time, Hitler did not have the necessary leadership skills. In 1918, the November Revolution took place in Germany, which led to the transformation of Germany from a federal constitutional monarchy to a democratic parliamentary republic.

 

End of the war

With the end of the war, Germany surrendered, and Kaiser Wilhelm II abdicated the throne and went into exile in the Netherlands. In 1919, the Treaty of Versailles was signed, under which Germany was obliged to pay large reparations to the winning side. Yet this was also a new opportunity for Germany. Freed from an authoritarian monarchy, the country was now open to political change. German philosophical studies flourished, and new political parties began to spread - and to spread their ideas.

The new authorities began to infiltrate these new groups and parties, hoping to use this openness to learn more about their ideas and orientations. Hitler, who was still in the army, was one of the informants. In 1919, as an undercover informant, he went to a bar where some parties were meeting for discussions, to spy on one of the right-wing parties: the German Workers' Party.

Unlike others in Germany at the time, Hitler did not see this as an opportunity for the nation to grow and form new ideas. The sudden decision to surrender instead felt like a keen betrayal and only fed the anger inside the young man. After Hitler heard their discussions, he was very impressed by their ideas about the political parties' betrayal of the German Army, and their scapegoating of Jews for Germany's defeat. Rather than informing on them, Hitler joined them. In a short time, he became one of the most prominent leaders of that party, eventually known as the National Socialist German Workers' Party.

 

Nazi Party

Their goals, plainly, were against Judaism, communism, and capitalism. Their arrogance was equally lofty: they believed that, as part of the Aryan race, they were themselves descendants of the inhabitants of the legendary continent of Atlantis. To them, who else should rule the world and return Germany to her proud place, with all her former glory, power, and prestige? The Nazi Party carried out propaganda and issued its own newspaper to spread its ideas and beliefs. The party attracted the attention of additional officers who were against the surrender decision and the government's plans to reduce the size of the army.

An early ally of Hitler’s was an officer in the German Imperial Army, Ernst Röhm. Initially a friend and ally of Hitler, Röhm was also the co-founder and leader of the “Storm Troopers,” the original paramilitary wing of the Nazi Party. Rather than dispose of the weapons he had taken possession of, Röhm instead armed the militias and party members with them. With a country unstable from a war, and weapons in the hands of angry men who blamed outsiders for their shame and defeat, the party was well positioned to strike for power.

In 1921, Hitler was elected leader of the Nazi Party, and in 1923 the golden opportunity appeared. Because Germany did not have the money to pay its reparations to the Allies, the government decided to print money. The amount of money in circulation increased without any corresponding increase in value, and the German mark collapsed. Prices soared, and a wave of great inflation hit Germany, becoming known as the German hyperinflation.

 

Continuing instability

In response to Germany's failure to pay the reparations imposed by the victorious powers after World War I and the Treaty of Versailles, France and Belgium occupied the Ruhr region. Hitler felt that this was an opportunity to seize power without elections, and staged a coup d'etat. It failed. Hitler was arrested and sentenced to five years in prison. The Bavarian Supreme Court pardoned him, however, and Hitler remained in prison for only nine months before his release.

In 1928, Hitler decided to participate in the elections, in which the Nazi Party won only about 2.5% of the vote, as the Germans once again rejected the Nazi proposal. But when the American stock market collapsed in 1929, it had a major impact on the whole world. As unemployment in Germany reached an estimated 6 million, the atmosphere became ripe for radical proposals - fertile ground for right-wing Nazis and left-wing Communists.

The Nazi Party took advantage of the opportunity to appear as saviors of the German people, for example, by providing aid to the unemployed, which made them the most popular party in Germany. In 1932, the Nazi Party, led by Hitler, became the largest German party, winning 37% of the votes. In 1933, the President of Germany appointed Hitler as chancellor, and Hitler came to power.

 



The 1942 Cripps Mission took place in the middle of the Second World War. It was an attempt in March of that year by Britain to secure greater Indian co-operation in the war effort. It involved Stafford Cripps, a member of the British cabinet, meeting various Indian political leaders.

Bilal Junejo explains.

A sketch of Stafford Cripps.

Whenever it is the purpose of a (political) mission which has to be ascertained, it behoves one to ask three questions without delay: why was the mission sent at all; why was it sent only when it was; and why did it comprise the individuals that it did. Unless such well-meaning cynicism is allowed to inform one’s analysis, it is not likely that one will be able to pierce the veil cast by official pronouncements for public consumption upon the true motives of those who were instrumental in bringing about the mission’s dispatch in the first place. There is, alas, no such thing as undue skepticism in the study of a political event.

So, to begin with, why was the mission in question — which brought with it an offer of an immediate share for Indians in the central government (Zachariah, 2004: 113) if they accepted “a promise of self-government for India via a postwar constituent assembly, subject only to the right of any province not to accede (Clarke and Toye, 2011)” — dispatched at all? A useful starting point would be Prime Minister Churchill’s declaration, when announcing in the House of Commons his administration’s decision to send a political mission to India, that:

“The crisis in the affairs of India arising out of the Japanese advance has made us wish to rally all the forces of Indian life to guard their land from the menace of the invader … We must remember also that India is one of the bases from which the strongest counter-blows must be struck at the advance of tyranny and aggression (The Times, 12 March 1942, page 4).”

 

Japan in the war

Since entering the war just three months earlier, Japan had already shown her might by achieving what Churchill would call “the worst disaster and largest capitulation in British history” — namely the surrender of over 70,000 British and Commonwealth troops in Singapore, a British possession, in February 1942 (Palmer, 1964: 299) — and occupying thereafter the British colony of Burma, on India's eastern border, in March — a development which marked the first time since the outbreak of war in September 1939 that India, Great Britain's most cherished imperial possession, was directly threatened by the enemy. No such threat (or a vociferous demand for independence) had arisen at the time of World War I, which was why no similar mission (with a concrete offer) had been dispatched then. For over two years after its outbreak, no mission was dispatched during World War II either, even though a clamor for independence, spearheaded by the Indian National Congress (India's largest political party), did exist this time. It was only the Japanese advance westward that changed the picture. In Burma, the Japanese had been “welcomed as liberators, since they established an all-Burmese government (Palmer, 1964: 63).” To the British, therefore, it was imperative that the Indians were sufficiently appeased, or sufficiently divided, to eliminate the risk of the Japanese finding hands to open the gates of India from within — not least because even before Japan entered the war, it had been reported that:

 

“Arrangements are in progress for an inter-Imperial conference on war supplies to be held in Delhi … [where] it is expected that the Governments of East Africa, South Africa, Australia, New Zealand, Burma, and Malaya will be represented … to confer … on mutually developing their resources to provide the maximum for self-defence and for Great Britain … India (my emphasis), with her vast and varied resources and her central position, is the natural pivot for such arrangements (The Times, 8 August 1940, page 3).”

 

Small wonder, then, that the premier should have described the proposals which the Mission would be bringing as “a constructive contribution to aid India in the realization of full self-government (The Times, 12 March 1942, page 5).” But whilst a desire to garner Indian support for repelling the Japanese would seem able to explain why the mission was sent at all (as well as when), would that desire have also been sufficient to elicit on its own a public offer of eventual self-government from an imperialist as committed as Winston Churchill? As late as October 1939, in a letter to Jawaharlal Nehru (one of the principal Indian leaders), the non-party Stafford Cripps, who had established quite a good rapport with Nehru (Nehru, 2005: 224-5), would be writing (with reference to the Chamberlain administration) that:

“I recognise that it is expecting a great deal more than is probable to expect this Government to do anything more than make a meaningless gesture. The addition of Winston Churchill [to the Cabinet, as First Lord of the Admiralty] has not added to the friends of Indian freedom, though he does look at matters with a realism that is an advantage (Nehru, 2005: 398).”

 

Realism?

Were the Mission’s proposals a (belated) sign of that ‘realism’ then? Even though just six months earlier, shortly after drawing up with President Roosevelt the Atlantic Charter — a declaration of eight common principles in international relations, one of which was “support for the right of peoples to choose their own form of government (Palmer, 1964: 35)” — Churchill had created “a considerable stir when [he] appeared to deny that the Atlantic Charter could have any reference to India (Low, 1984: 155)”? As it turned out, it was realism on Churchill’s part, but without having anything to do with recognizing Indian aspirations. That is because when Churchill announced the Mission, his intended audience were not the Indians at all — not least because they never needed to be. The indispensability of India to the war effort was indisputable, but there was hardly ever any need for Churchill to appease the Indians in order to save the Raj. Simply consider the ease with which the Government of India, notwithstanding the continuing proximity of Japanese forces to the subcontinent, was able to quell the Congress-launched Quit India Movement of August 1942 — which was even described in a telegram to the premier by the Viceroy, Lord Linlithgow, as “by far the most serious rebellion since that of 1857, the gravity and extent of which we have so far concealed from the world for reasons of military security (Zachariah, 2004: 117).” The quelling anticipated Churchill’s asseveration that “I have not become the King’s First Minister in order to preside over the liquidation of the British Empire. For that task, if ever it were prescribed, someone else would have to be found … (The Times, 11 November 1942, page 8).” When British might in India was still a force to be reckoned with, what consideration(s) could have possibly served to have induced the Mission’s dispatch just five months earlier? What would Churchill not have gained had he never sent it?

 

There are two aspects to that, the second of which also addresses the third of our original questions — namely why the Mission was led by the individual that it was. The first aspect was Churchill’s desire, following the debacle in Singapore, to reassure not just his compatriots but also his indispensable transatlantic allies that something was being done to safeguard resource-rich India from the enemy (Owen, 2002: 78-9). With India “now a crucial theatre of war in the path of Japanese advance, Cripps exploited US pressure to secure Churchill’s reluctant agreement to the ‘Cripps offer’ (Clarke and Toye, 2011).” This was not very surprising, for given that he was president of a country which not only owed her birth to anti-imperialism but had also just subscribed to the Atlantic Charter, Roosevelt could not afford domestically to be seen condoning (British) imperialism anywhere in the world. The American view was that Indian support for fighting Japan would be better secured by conciliation than by repression (The Daily Telegraph, 13 April 1942, page 2), and Roosevelt even sent his personal representative, Colonel Louis Johnson, to India to assist in the negotiations (Clarke and Toye, 2011). Under such circumstances, Churchill could have only confuted the Americans by first making an offer of which Washington approved to the Indians, and then proclaiming the futility thereof after it had been rejected by them (The Daily Telegraph, 1 April 1942, page 3). As he wrote before the Mission’s dispatch to Linlithgow, a fellow reactionary who would do much to sabotage the ‘Cripps offer’ by his (predictable) refusal to reconstruct the Executive Council in accordance with Congress’s wishes (removing thereby any incentive Congress might have had for consenting to postwar Dominion status) (Moore, 2011):

“It would be impossible, owing to unfortunate rumours and publicity, and the general American outlook, to stand on a purely negative attitude and the Cripps Mission is indispensable to proving our honesty of purpose … If that is rejected by the Indian parties … our sincerity will be proved to the world (Zachariah, 2004: 114).”

 

Public relations

As anticipated, this public relations gesture, “an unpalatable political necessity” for the gesturer (Moore, 2011) and therefore proof of his ‘realism’, worked — all the more after Cripps, who considered neither Churchill nor Linlithgow primarily responsible for his failure in India (Owen, 2002: 88), proceeded to “redeem his disappointment in Delhi by a propaganda triumph, aimed particularly at the USA, with the aim of unmasking Gandhi as the cause of failure. One result of the Cripps mission, then, was … [that] influential sections of American opinion swung to a less critical view of British policy. In this respect, Churchill owed a substantial, if largely unacknowledged, debt to Cripps (Clarke and Toye, 2011).” The ulterior motive behind sending the Mission became evident to some even at the time. As Nehru himself reflected after once more landing in gaol (for his participation in the Quit India Movement):

“The abrupt termination of the Cripps’ negotiations and Sir Stafford’s sudden departure came as a surprise. Was it to make this feeble offer, which turned out to be, so far as the present was concerned, a mere repetition of what had been repeatedly said before — was it for this that a member of the British War Cabinet had journeyed to India? Or had all this been done merely as a propaganda stunt for the people of the USA (Nehru, 2004: 515)?”

 

A desire, therefore, to satisfy the Americans, who were his intended audience, would explain why Churchill acquiesced in the Mission. But now we come to the other aspect which was alluded to earlier — namely why it was the Cripps Mission. To begin with, Cripps, a non-party person since his expulsion from the Labour Party in January 1939 for advocating a Popular Front with the communists (Kenyon, 1994: 97), had, shortly after the outbreak of war in September, embarked upon a world tour, convinced that “India, China, Russia, and the USA were the countries of the future (Clarke and Toye, 2011)”, and that it would therefore be worth his country's while to ascertain their future aims. “In India Cripps was warmly received as the friend of Jawaharlal Nehru … [and] though unofficial in status, Cripps's visit was undertaken with the cognizance of the India Office and was intended to explore the prospects of an agreed plan for progress towards Indian self-government (Clarke and Toye, 2011).” But whilst this visit helped establish his bona fides with the Indian leaders and gave him such a knowledge of Indian affairs as would later make him a publicly suitable choice for leading the Mission (The Daily Telegraph, 22 April 1952, page 7), Churchill had more private reasons for choosing Cripps in 1942 — as we shall now see.

 

Going abroad

After becoming prime minister in 1940, “Churchill [had] used foreign postings cannily to remove potential opponents and replace them with supporters; as well as Halifax, Hoare and Malcolm MacDonald (who was sent to Canada as high commissioner), he sent five other Chamberlainite former ministers abroad as the governors of Burma and Bombay, as minister resident in West Africa and as the high commissioners to Australia and South Africa. Several others were removed from the Commons through the time-honored expedient of ennobling them (Roberts, 2019: 622).” Similarly, the left-wing Cripps was also sent out of the country — as ambassador to Moscow, where he served for eighteen months, Churchill contemptuously observing when it was suggested Cripps be relocated that “he is a lunatic in a country of lunatics, and it would be a pity to move him (Roberts, 2019: 622).” To us, this remark shows how the Cripps Mission vis-à-vis India was inherently frivolous; for had Churchill considered the fulfilment of its ostensible aims at all important, would he have entrusted the Mission to a ‘lunatic’ (rather than to, say, Leopold Amery, who was his trusted Indian Secretary, and who had already dissuaded him from going to India himself (Lavin, 2015))?

However, after America entered the war, “Churchill [for reasons irrelevant to this essay] came to think Cripps a bigger menace in Russia than at home and sent permission for him to return to London, which he did in January 1942 … [to be] widely hailed as the man who had brought Russia into the war (Clarke and Toye, 2011)” — this at a time when Churchill himself was grappling with a weakened domestic position (Addison, 2018), which the fall of Singapore would do nothing to improve. Anxious to win over the non-party Cripps, who was now his foremost rival for the premiership (Roberts, 2019: 714), Churchill “brought him into the government as Lord Privy Seal and Leader of the House of Commons (Clarke and Toye, 2011).” Rather than engage in domestic politics, however, Cripps “chose to invest his windfall political capital in an initiative to break the political impasse in India (Clarke and Toye, 2011).” But, “as Churchill may well have calculated in advance, the Mission failed and Cripps’s reputation was diminished (Addison, 2018).” The political threat to Churchill decreased considerably, for failure in India meant that Cripps’s removal as Leader of the House of Commons was “inevitable” (The Times, 22 April 1952, page 6). Who could have aspired to the premiership under such circumstances? The Mission had not even been a gamble for Churchill (who would have never sent Cripps only to add to his political capital), since the offer’s provision, prudently inserted by Amery (Lavin, 2015), for a province’s right to refuse accession to a postwar Indian Dominion was certain to have been welcomed by the Muslim League (India’s foremost Muslim political party) — which had declared its quest for some form of partition as early as March 1940 (with the Lahore Resolution), and the retention of whose support during the war was crucial because the Muslims, “besides being a hundred million strong, [constituted] the main fighting part of the [Indian] Army (Kimball, 1984: 374)” — but equally certain to have been rejected by the Hindu-dominated Congress (which was already irked by the stipulation that Dominion status would be granted only after the war, which nobody at the time could have known would end but three years later). Not for nothing had Churchill privately assured an anxious King George VI shortly after the Mission’s dispatch that “[the situation] is like a three-legged stool. Hindustan, Pakistan, and Princestan. The latter two legs, being minorities, will remain under our rule (Roberts, 2019: 720-1).”

 

Conclusion

To conclude, given his views on both India and Cripps, it is not surprising that the premier should have entertained a paradoxical desire for the Mission to succeed by failing — which it did. By easing American pressure on Downing Street to conciliate the Indians and politically emasculating Stafford Cripps at the same time, the Mission served both of the purposes for which it had been sent so astutely by Prime Minister Churchill.

 


 

 

Bibliography

Addison, P. (2018) Sir Winston Leonard Spencer Churchill. Oxford Dictionary of National Biography [Online]. Available at https://doi.org/10.1093/ref:odnb/32413 [Accessed on 20.05.24]

Clarke, P. and Toye, R. (2011) Sir (Richard) Stafford Cripps. Oxford Dictionary of National Biography [Online]. Available at https://doi.org/10.1093/ref:odnb/32630 [Accessed on 20.05.24]

Kenyon, J. (1994) The Wordsworth Dictionary of British History. Wordsworth Editions Limited.

Kimball, W. (1984) Churchill & Roosevelt: the complete correspondence. Volume 1 (Alliance Emerging, October 1933 - November 1942). Princeton University Press.

Lavin, D. (2015) Leopold Charles Maurice Stennett Amery. Oxford Dictionary of National Biography [Online]. Available at https://doi.org/10.1093/ref:odnb/30401 [Accessed on 20.05.24]

Low, D. (1984) The mediator’s moment: Sir Tej Bahadur Sapru and the antecedents to the Cripps Mission to India, 1940-42. The Journal of Imperial and Commonwealth History [Online]. Available at https://doi.org/10.1080/03086538408582664 [Accessed on 20.05.24]

Moore, R. (2011) Victor Alexander John Hope, second marquess of Linlithgow. Oxford Dictionary of National Biography [Online]. Available at https://doi.org/10.1093/ref:odnb/33974 [Accessed on 20.05.24]

Nehru, J. (2004) The Discovery of India. Penguin Books India.

Nehru, J. (2005) A Bunch of Old Letters. Penguin Books India.

Owen, N. (2002) The Cripps mission of 1942: A reinterpretation. The Journal of Imperial and Commonwealth History [Online]. Available at https://doi.org/10.1080/03086530208583134 [Accessed on 20.05.24]

Palmer, A. (1964) A Dictionary of Modern History 1789-1945. Penguin Reference Books.

Roberts, A. (2019) Churchill. Penguin Books.

The Daily Telegraph (1 April 1942, 13 April 1942, 22 April 1952)

The Times (8 August 1940, 12 March 1942, 11 November 1942, 22 April 1952)

Zachariah, B. (2004) Nehru. Routledge Historical Biographies.


Robert F. Kennedy Jr.'s suspension of his third-party campaign for president and endorsement of Republican Donald J. Trump was a development with historical resonance. RFK, Jr. has long been known as a fiercely independent and idiosyncratic lawyer and environmentalist with an eclectic collection of positions and ideas, including vaccine skepticism. But among his other actions and assertions, RFK, Jr.'s embrace of Trump and, by extension, the Republican party stands out for its direct opposition to the Democrats, the party of his forefathers, who did much to shape its values and lore and inspire future generations of adherents. RFK, Jr. is now campaigning energetically for Trump, and given the still-potent draw of the legendary Kennedy name, his support could conceivably make the difference in a razor-tight race.

Larry Deblinger explains.

Bobby Kennedy (left) with President Lyndon B. Johnson in 1966.

Upon hearing of RFK’s decision, five of his eight surviving siblings released a brief statement condemning it as a “betrayal” of their family’s values and “a sad ending to a sad story.” Previously,  at least 15 Kennedy family members had shunned RFK Jr.’s candidacy and endorsed Joe Biden for president, before Biden dropped out. These relatives appear to view RFK, Jr. as a black sheep of the family, an aberration whose actions should be lamented and dismissed.

It might be tempting to view RFK, Jr.'s “sad story” through the operatic lens through which the Kennedy family saga has typically been chronicled, replete with tragic and untimely deaths, noble ideals, soaring oratory, and unrealized dreams. Indeed, RFK, Jr. hinted that his move to come rested on a high-minded principle, befitting a Kennedy. In April, RFK, Jr. asserted on CNN that President Joe Biden was a greater threat to American democracy than Trump, even though he called Trump's attempts to subvert the 2020 election and other of his actions “appalling.” He argued that social media websites, under pressure from a Biden administration that had weaponized government agencies, had blocked him from espousing his vaccine conspiracy theories, thus violating his Constitutional right to freedom of speech and threatening the most important pillar of democracy.

But it serves to note that RFK, Jr.'s complaint was also of a direct and personal nature. And in this context, it must also be considered that alongside their good looks and charisma, drive, wit, brilliance, eloquence, and idealism, prominent Kennedys have shown a capacity to act out of sheer spite: personal, petty, mean-spirited, and hateful vindictiveness. Both RFK, Jr.'s father, Robert F. “Bobby” Kennedy, and his uncle Edward M. “Ted” Kennedy evinced this marked tendency in the political arena at key moments in American history. Through this lens, RFK, Jr.'s action appears not so much a “betrayal” of Kennedy family values as another recurrence of a familiar Kennedy failing, and his allegiance with Trump little more than a personal and vindictive swipe against the Democratic party.

 

Youth

From his youth, Bobby Kennedy was a kind of family attack dog, keen to perceive and avenge any slights to himself or his family members. The “runt” of Rose and Joseph Kennedy’s storied litter, Bobby made up for his small size and limited talents (at least compared with his brothers) with tenacity and scrappiness in sports and academics, often spoiling for fights. It did not take much; as a student at Harvard, RFK once smashed a beer bottle over a young man’s head, sending him to the hospital for stitches, simply because he had the temerity to celebrate his birthday at the same Cambridge bar and same time as Bobby.1  And he held a grudge. “When Bobby hates you, you stay hated,” Joe Kennedy once said of the son who seemed most to take after him.2 As an adult, armed with a law degree from the University of Virginia, RFK became an assistant counsel to US Republican Senator Joseph V. McCarthy’s infamous investigative committee that during 1953-54 recklessly and often spuriously alleged Communist influence in the US government and media.

It was during this period that RFK first met then-Senate Majority Leader Lyndon Johnson, a Democrat from Texas, and for Bobby, it was hatred at first sight. He had known of Johnson as a protégé of former President Franklin D. Roosevelt, the man who had recalled his father from his post as US Ambassador to England in 1940 and fired him; Johnson was at FDR's side during much of the humiliating process, and that, apparently, was enough for Bobby.3 FDR had clear and substantive reasons for his action, including Joe Kennedy's early support for appeasement of Adolf Hitler in the late 1930s; publicly expressed pessimism over the survival of Great Britain and of democracy in Europe (and privately expressed antisemitism); suspicion of his being a Nazi sympathizer; and British Prime Minister Winston Churchill's calls for Kennedy's dismissal. Nonetheless, son Bobby saw the firing as a family offense not to be forgiven.

So, when Majority Leader Johnson entered the Senate cafeteria with two assistants one day in 1953 and passed a table where McCarthy was meeting with his staff, Bobby sat glowering in his seat while the rest of McCarthy's team jumped up to shake the hand of the “Leader,” in keeping with Senate decorum.3 Not to be deterred, the towering, nearly 6-foot-4 LBJ stood over RFK and stuck out his hand, waiting for a long, awkward moment before Bobby finally rose and shook it without looking at Johnson.

 

Feud

The epic LBJ-RFK feud was on. There were Johnson’s repeated attempts after the first to squeeze handshakes out of Bobby Kennedy just to torment him, and a few disparaging comments from Johnson about Joe Kennedy’s ambassadorship in England. There was the incident in 1959 on Johnson’s ranch, where RFK was sent by his brother John to sound out Johnson on his intentions of running for president, when LBJ insisted on some deer hunting and Bobby was thrown flat on his back by a rifle recoil. “Son, you’ve got to learn how to handle a gun like a man,” Johnson said as he helped him up.4  

 

Beyond the insults, RFK despised Johnson as a man who in his opinion exhibited all the worst traits of the classic politician: an unprincipled and conniving lust for power, loose regard for the truth, rampant egoism, and selfish vanity. To RFK’s Northeastern elite sensibilities, Johnson’s rude and crude Southwestern-dirt-poor, working-class manners, physically overbearing political style, and segregationist past were repugnant and worthy of withering scorn, something Johnson fully recognized and resented.

But the true measure of RFK's pettiness emerged with the ascendance of LBJ to Vice President in his brother John's administration, and to the presidency after his brother's death: an inability to respect the office however much he detested the man. Even though JFK had offered LBJ the VP post, considering him vital to his electoral prospects, and LBJ had accepted, Bobby repeatedly visited Johnson in his hotel room during the Democratic convention to press him to decline the offer. RFK later insisted his attempts were at his brother's behest, a contention that historians view with skepticism.5,6 It was during this episode that Johnson began calling RFK “that little shitass” and “worse” names, according to a close associate.7

The ill-will continued through JFK’s tragically shortened presidency, under which RFK served as Attorney General. JFK knew that the vice presidency was an extremely confining office for an accomplished power broker like Johnson, and he was determined that LBJ be treated with dignity, if only to assuage his massive ego. In general, JFK and Johnson enjoyed cordial, gentlemanly, and mutually respectful relations.8,9 Yet, RFK radiated disrespect towards Johnson, barging into his meetings without a word of apology and treating him like an underling9; indeed, for all practical purposes, Bobby was the number two in the JFK administration. The tight-knit Kennedy staffers called LBJ nicknames like “Uncle Cornpone” behind his back.10

 

Out of Office

It was even worse out of the office. Bobby and his wife Ethel held frequent parties for “Kennedy people” (Johnson called them “the Harvards”) at their home, Hickory Hill in Virginia, where the ridicule of LBJ turned kind of sick, according to historian Jeff Shesol in his 1997 book on the RFK-LBJ feud, Mutual Contempt:

Johnson jokes and Johnson stories were as inexhaustible as they were merciless. Those that percolated during the campaign had been humorous, but this new material betrayed a real bitterness, a mean-spiritedness that was hard to explain…Time (magazine)’s Hugh Sidey, a frequent visitor to Hickory Hill was appalled by the gang’s ridicule of LBJ, which he described as “just awful…inexcusable, really.” In October 1963, friends gave Bobby Kennedy an LBJ voodoo doll; “the merriment,” Sidey later reported, “was overwhelming.”11

 

 

The frivolity likely vanished after the assassination of JFK in Dallas, Texas, but not the feud between RFK and LBJ, exacerbated by the fact that the shooting occurred in Johnson’s home state. RFK, overwhelmed with grief, resolved to stay on as Attorney General, but without letting go of his animosity. “From the moment Air Force One (bearing JFK’s body) landed in Washington, and progressively in the days and weeks that followed, Bobby was ready to see slights to his brother, his brother’s widow, or himself in whatever Lyndon Johnson did or didn’t do,” wrote LBJ biographer Merle Miller.12         

Although Johnson performed faithfully and admirably in honoring JFK’s legacy and advancing his policy agenda, according to contemporary journalists and historians, his personal attempts as President to show respect and sensitivity to the Kennedys were all rudely rebuffed. “Overtures from Johnson to the Kennedy family after the Kennedy assassination were rejected in a manner that was thoroughly offensive and insulting,” observed contemporary Clark Clifford, an eminent Washington DC attorney and veteran Democratic party insider.12

And the hostility did not stop at mere personal gestures.  As historian Shesol explains of Johnson’s early days as president:

Johnson desperately needed affirmation, and in the hour of his greatest burden, it came from unlikely sources—from the Congress, which had spurned and mocked him for a thousand days; from the cabinet, appointed by his predecessor; from the American people, who cherished John Kennedy in death as they had not in life. All rallied to the new president. They gave him their patience and their trust.

Bobby Kennedy was not among them, and in Bobby’s absence Johnson felt the suspicion and rejection he feared from the rest.13  

Ironically, a book that the Kennedy family had commissioned expressly to control the narrative of the JFK assassination and its aftermath, and to protect their image, publicly exposed the intense antagonism towards LBJ, which shocked reviewers. Entitled “The Death of a President” and written by William Manchester, who was given extensive and exclusive access to the Kennedys and their records, the book was, in the words of Time magazine, “seriously flawed by the fact that its partisan portrayal of Lyndon Johnson is so hostile that it almost demeans the office itself.” It is impossible to parse exactly what proportion of this hostility came independently from the author rather than the Kennedys (although the author was handpicked and vetted by the family). At any rate, the Kennedys were unhappy with the book for various reasons and sued to stop general publication until changes were made. “Bobby worried that the book might make it appear that the Kennedys had not given Johnson a chance to succeed in the Presidency and that their opposition was nothing more than a personal vendetta,” wrote Michael W. Schuyler, a historian at Kearney State College in Nebraska.14

Bobby Kennedy

LBJ went on to win election in his own right in 1964, by one of the largest landslide victories in American history. He then successfully pushed through epochal Civil Rights legislation and social welfare programs like Medicare and Medicaid, anti-poverty initiatives, and other legislation ranging from the arts to immigration, environmental protection, education, and gun control, compiling a domestic record that, on the whole, remains a landmark achievement of American progressivism. But his controversial and disastrous Vietnam war policies rapidly undermined his presidency, compelled him to decline to run for re-election, and ended his political career. Bobby Kennedy left the Johnson administration to run for US Senator from New York, winning the seat in 1964. He was assassinated in 1968 while campaigning for president on an anti-war platform.

It might be reassuring, in terms of the Kennedy legacy, to think that the LBJ-RFK feud was entirely a one-off, generated by the forced proximity and interaction of two dynamic personalities who were almost uniquely born to clash. But that is not the case. A mere 12 years after Bobby’s violent death, a relatively brief but all-too-familiar spectacle of petty and personal spite and resentment involving a Kennedy took center stage in American politics.

1980 convention

The setting was the Democratic party convention of 1980, a presidential election year. The intraparty combatants were the incumbent US president James Earl Carter, son of a peanut farmer from Georgia, and Ted Kennedy, US Senator from Massachusetts and scion of the wealthy, celebrated, star-crossed political family, which some Americans viewed as royals in exile. Although Carter had won the party’s nomination handily after a bitter battle, he stood awkwardly at the podium, having completed his acceptance speech, waiting for Kennedy to arrive and, in effect, certify his candidacy as though he were a higher authority.

The contest itself was inherently anomalous, and humiliating for Carter. “Never before had a sitting President, an elected President, with command of both houses of Congress and the party machinery, been so challenged by his own people. What was even more remarkable was the nature of the challenge—a charge of incompetence,” wrote contemporary journalist and historian Teddy White.15

By 1980, Carter’s presidency was foundering, beset on all sides by crises foreign and domestic. The economy was struggling with “stagflation,” the combination of persistent inflation, slow economic growth, and high unemployment. The 1979 revolution in Iran, which replaced the US-backed Shah with an Islamic theocracy, spooked Americans who remembered the Arab oil embargo of the early 1970s and drove them to hoard gas, resulting in long gas lines, dwindling supplies, and mounting hysteria, including killings and riots. The infamous Iran hostage crisis erupted in 1979 when Iranian militants, prompted at least in part by Carter’s decision to allow the exiled Shah to enter the US for cancer treatment, captured over 50 Americans at the US embassy in Tehran and held them for 444 days.

Ted Kennedy

Despite some landmark achievements, such as his forging of the Camp David Peace Accords between Israel and Egypt, Carter failed to convince the American people that he had a sure grip on the helm of state. He had a curiously stiff personal style, despite his ever-present wide smile, and a technician’s approach to solving national problems that was uninspiring to the public and did not always work. Carter’s closest advisers, like the President himself, were from Georgia, and the team came to office with a regional chip on their shoulders, bristling with peevish hyperawareness of, if not combative pride in, being outsiders to the Washington establishment. As Carter’s approval ratings plunged, sinking to 28% in June of 1979, a bit of that Southern defiance appeared to flare when Carter was asked at a gathering of Congressmen whether he planned to run for re-election (a question insulting in itself), particularly given the possibility that Ted Kennedy might challenge him for his party’s nomination.

“I’m going to whip his ass,” Carter replied, referring to Kennedy, and then repeated it, when asked (in disbelief) if that was what he meant.16 When confronted with the widely reported statement, Kennedy smoothly responded that the president must have been misquoted.

It was the first publicly overt expression of tension between Carter and Kennedy. Later in 1979, further signs of tension and rivalry were palpable at the opening of the John F. Kennedy Library in Boston, where they both spoke. The event started out inauspiciously for Carter when he leaned in to kiss Jacqueline Kennedy Onassis on the cheek in greeting, “just as a matter of courtesy,” and “she flinched away ostentatiously,” as Carter remembered decades later.17 In their speeches, ostensibly in honor of JFK, both Carter and Kennedy slyly inserted warnings, or shots across the bow, to each other.

Observing with growing disgust Carter’s faltering efforts to be the president the American people wanted and needed, Kennedy became convinced that he could fill the void of leadership, and announced his candidacy for the Democratic nomination.

Contest

But the matchup was a contest of weaknesses. While Carter had acquired the image of a bumbler, Kennedy was a deeply flawed and inept candidate. Grave doubts about his character relentlessly shadowed him over the 1969 incident at Chappaquiddick, Massachusetts, an island off Martha’s Vineyard, when he drove a car off a bridge and into a pond, causing the death of Mary Jo Kopechne, a young woman who was a passenger in the car. Although Kennedy swam to safety, he failed to call the police for 10 hours, during which time Kopechne’s life might have been saved. Kennedy further undermined himself in a one-on-one interview on prime-time network television, in which he was unable to answer the direct question of why he wanted to be president, responding with an incoherent stream of hesitations and pointless phrases, i.e. an epic word salad. Mirroring this ambivalence, Kennedy campaigned with inconsistent energy and conviction, championing an old-line liberalism that many thought outdated.

A month before the convention, Carter had won enough primaries and delegates to secure his renomination, with a commanding lead over Kennedy; as promised, Carter had “whipped” Kennedy. And yet Kennedy refused to bow out, having adopted a “kamikaze-like state of mind,” according to Jon Ward in his 2019 book about the Carter-Kennedy rivalry, Camelot’s End. “Many in the Kennedy camp were disgusted by Carter,” wrote Ward. “They felt he was no better than (Republican presidential nominee Ronald) Reagan, and almost preferred to see Reagan win.”18

The Kennedy camp insisted on an “open convention,” meaning that delegates would be free to vote for whom they wished, regardless of the choice of the rank-and-file primary voters they were supposedly pledged to represent. In the meantime, a poll showed Carter with a 77% national disapproval rating.19 The Democrats agreed to put the open-convention question to a floor vote.

When the open-convention vote was over, Carter had finally won the nomination with almost two-thirds of the vote. Kennedy conceded, but he was not done fighting. His camp insisted on a party platform vote, including liberal planks far to the left of Carter’s policies, which would defy and embarrass the President; the vote would take place right after Senator Kennedy was scheduled to speak, so as to set the most favorable atmosphere for the planks’ approval.

The Carter people knew exactly what was planned and were losing patience. “If you have any wisdom and judgment at all, you know you don’t get carried away by personalities and pettiness in a political fight,” recounted Carter’s campaign manager, Bob Strauss, to The New Yorker. “Politics is tough enough…that you don’t cut each other’s throats.” Carter’s Press Secretary, Jody Powell, later wrote in his memoir of the election, “We neglected to take into account one of the most obvious facets of Kennedy’s character, an almost child-like self-centeredness.”

Kennedy’s speech        

In the event, Kennedy’s convention staff did behave childishly, like a bunch of drunken frat-boys, on the day of his speech. Kennedy floor manager Harold Ickes invoked an obscure procedural rule to stop the afternoon convention activities, “in a gesture done purely out of spite,” wrote Ward in a 2024 Politico article.  “We just said, ‘Fuck ‘em,’” explained Ickes in an interview. “I mean, we weren’t thinking about the country. We weren’t even thinking about the general election. It was, ‘Fuck ‘em.’ You know? To be blunt about it.”

Fistfights almost broke out on the convention floor when outraged Carter staffers confronted Ickes, who responded with “Go fuck yourself, I’m shutting this convention down.” The fisticuffs were luckily averted by a phone call from Kennedy, who was watching the proceedings on television in his hotel room and was curious to know what had stopped them. When told the convention would be stalled for two hours, Kennedy, after a long pause, told Ickes to allow it to go forward.

Perhaps relieved from the burden of pursuing a losing cause, Kennedy gave a thoughtful, eloquent, stem-winding speech later that night, which is still remembered as one of the best speeches in American political convention history. Kennedy invoked the Democratic party’s heritage of support for the common man, and the wisdom of 19th century poet Alfred Lord Tennyson, with pleas to re-unite the country and the party, lyrically concluding, in a paean to big-hearted, big-spending liberalism, "the work goes on, the cause endures, the hope still lives, and the dream shall never die."

And yet, the good vibes and elevated, Camelot-like aura were shattered by another Kennedy-driven spectacle before a prime-time national TV audience on the last, climactic night of the convention. Carter did not help his cause by starting off his acceptance and campaign kick-off speech with a shouted tribute to Democratic Senator and former Vice President Hubert Horatio Humphrey, whom he misnamed Hubert Horatio Hornblower (Horatio Hornblower was a fictional, Napoleonic-era British naval officer in a popular 20th-century series of stories and novels) before hastily correcting himself. When he had finished his speech, almost 20 minutes ticked by as various party luminaries (and some not so luminary) joined him on the stage for a desultory show of unity, waiting for the final moment and leaving bored TV news commentators to mutter derisive comments to their audiences.

The Kennedy team had orchestrated that final moment by insisting that Kennedy would not watch the speech at the arena but in his hotel room, and would then make his way to the convention, thus having the dramatically delayed, final appearance of the show, like the top star of a rock concert, or a champion boxer.  

Handshake

When Kennedy did appear, to a roar of excitement, it was obvious to almost everyone watching, or made clear to them by the TV journalists on the scene, that Carter was looking for one thing: the classic political handshake of the party’s top politicians, former rivals, standing together in full view of the spotlights and cameras, their interlocked hands thrust high in the air, in a thrilling and triumphant show of unity, strength, and expectation of victory, of party over personal interest, bitterness, and division. He never got it. Kennedy did shake Carter’s hand five times, by Ward’s count, but each time in a crowd, with the brief, perfunctory manner with which a campaigner might take the hand of someone on a rope line. The TV commentators duly noted each increasingly embarrassing failure. As Carter followed him around, Kennedy began to “smirk” and “chuckle,” according to Ward; he finally patted Carter on the back before leaving the arena to cheers.

Two months later, Paul Corbin, a peripheral, unofficial member of the Kennedy campaign staff with longstanding ties as a helper to the Kennedy family, stole Carter’s briefing books for a general election debate with Reagan and gave them to the Reagan campaign, according to information gathered in a Congressional investigation and a 2009 book by political consultant and author Craig Shirley.20

After the 1980 election

Carter went on to lose the election to Reagan, but has since led one of the most active, productive, and distinguished post-presidential lives in American history. Ted Kennedy, who died in 2009, remained US Senator from Massachusetts for decades, compiling a highly distinguished legislative career featuring his steadfast advocacy for a national health care system, which was finally realized in at least some form in 2010, under the Obama administration.

With regard to health care reform, however, Carter charged in his presidential memoirs that his administration’s proposal for a national health plan, which was devised over a two-year period by an array of economic experts and government leaders, including Ted Kennedy, and had support from key Congressional leaders, was scuttled by Kennedy in 1979 when he opposed it “at the very end,” which ultimately resulted in a 30-year delay in national health care.21 Carter repeated the charge in 2010 in TV interviews with 60 Minutes and Larry King, alleging that Kennedy acted “out of personal spite” and out of ambition to run for president and enact his own health care plan. In his own writings, Kennedy counter-charged that it was Carter who delayed the plan (https://www.cbsnews.com/news/time-has-not-cooled-jimmy-carter-ted-kennedy-feud/).

RFK, Jr.

And so, we arrive at RFK, Jr., son of Bobby and nephew of Ted, choosing to support Republican Donald J. Trump, a convicted felon facing dozens of additional criminal charges, in his campaign for re-election as president. RFK, Jr. appears to justify his stance at least partly as a defense of freedom of speech. But he has yet to explain how supporting a candidate whose relentless abuse and corruption of that very right, by knowingly spewing lies that have sown chaos, threatened the democratic system, and endangered public safety, could possibly serve to protect freedom of speech and democracy. Then again, RFK, Jr.’s stance might have little to do with anything so grand as ideas, principles, and the national interest. After all, he is a Kennedy.  

Print References

1. Caro, Robert A. (2012). The Years of Lyndon Johnson: The Passage of Power. Alfred A. Knopf; New York, NY: p. 63.

2. Ibid., p. 66.

3. Ibid., pp. 61-63.

4. Shesol, Jeff. (1997). Mutual Contempt: Lyndon Johnson, Robert Kennedy, and the Feud that Defined a Decade. W.W. Norton & Company; New York, NY: p. 10.

5. Caro, Robert A. (2012). The Years of Lyndon Johnson: The Passage of Power. Alfred A. Knopf; New York, NY: pp. 122-140.

6. Shesol, Jeff. (1997). Mutual Contempt: Lyndon Johnson, Robert Kennedy, and the Feud that Defined a Decade. W.W. Norton & Company; New York, NY: pp. 48-57.

7. Caro, Robert A. (2012). The Years of Lyndon Johnson: The Passage of Power. Alfred A. Knopf; New York, NY: p. 139.

8. Ibid., pp. 177-195.

9. Shesol, Jeff. (1997). Mutual Contempt: Lyndon Johnson, Robert Kennedy, and the Feud that Defined a Decade. W.W. Norton & Company; New York, NY: pp. 77-79.

10. Caro, Robert A. (2012). The Years of Lyndon Johnson: The Passage of Power. Alfred A. Knopf; New York, NY: p. 198.

11. Shesol, Jeff. (1997). Mutual Contempt: Lyndon Johnson, Robert Kennedy, and the Feud that Defined a Decade. W.W. Norton & Company; New York, NY: p. 104.

12. Ball, Moira Ann. The phantom of the oval office: The John F. Kennedy assassination’s symbolic impact on Lyndon B. Johnson, his key advisors, and the Vietnam decision-making process. Presidential Studies Quarterly. 1994;24(1):105-119.

13. Shesol, Jeff. (1997). Mutual Contempt: Lyndon Johnson, Robert Kennedy, and the Feud that Defined a Decade. W.W. Norton & Company; New York, NY: p. 119.

14. Schuyler, M.W. Ghosts in the White House: LBJ, RFK, and the assassination of JFK. Presidential Studies Quarterly. 1987;17(3):503-518.

15. Ward, Jon. (2019). Camelot’s End: Kennedy vs. Carter and the Fight that Broke the Democratic Party. Hachette Book Group; New York, NY: p. 146.

16. Ibid., p. 126.

17. Ibid., p. 152.

18. Ibid., p. 230.

19. Ibid., p. 251.

20. Ibid., pp. 284-285.

21. Carter, Jimmy. (2010). White House Diary. Farrar, Straus and Giroux; New York, NY: p. 325.

The British Labour Party won the 2024 general election. With that in mind, Vittorio Trevitt looks at the past Labour governments of Clement Attlee and Harold Wilson, and considers how these governments handled welfare policy.

Clement Attlee with John F. Kennedy in 1961.

The UK General Election held in July 2024 was a truly historic event, with Labour returning to office after more than a decade in opposition. The fact that Labour did so with such a massive majority means that they have a strong mandate to transform Britain into a fairer nation. Although the state of public finances has resulted in Labour removing universal Winter Fuel Allowances for most pensioners (ironically reversing a policy implemented under the Blair Government in 1997), with more cuts likely to follow, it is highly probable that as economic conditions improve there will be greater leeway for Labour to expand social provisions, such as fulfilling its proposals for extending rights to statutory sick pay and introducing free breakfast clubs in all English primary schools. In the past, Labour has encountered severe financial difficulties but has still managed to establish a broad array of social security grants that have done much to improve the quality of life for ordinary households. Two former Labour administrations that Starmer and his ministers can look to for guidance are the Attlee Government of 1945-51 and the 1964-70 and 1974-76 Wilson Governments.

Attlee Government

The Labour Government that came to power in the first election following the end of the Second World War has long been held in high esteem not only by historians but also by Labour Party activists and politicians. Led by veteran Labourite Clement Attlee, it was by far the most radical and successful that Britain had experienced by that time. Although the country Labour inherited was in a parlous financial state (the legacy of World War II), Attlee and his ministers would not disappoint an electorate hungry for change after years of strife and sacrifice. Over the next six years, they drastically changed Britain for the better. One way they achieved this was through the construction of a comprehensive and universalistic welfare model. Although Britain had a long history of welfare provision, the Attlee Government greatly built on the existing framework by setting up a system that covered all citizens. One of the pillars of this new solidaristic edifice, the National Health Service, was notable in making free access to every form of healthcare (such as medical and dental care, eyeglasses and hearing aids) a right for every citizen; one that the service has continued to uphold despite frequent cuts and overhauls in the decades since its “birth.” The 1946 National Insurance Act set up a broad network of cash payments incorporating a range of risks such as old age, widowhood and funeral costs. Apart from the normal rates, increases could be made in national insurance payments for particular cases. Also, where employers had failed to meet the contribution requirements of the Act, resulting in recipients losing partly or entirely the maternity, sickness or unemployment benefits that were theirs by right, such individuals could recover the lost amounts from said employers as a civil debt. A Five Year Benefit Review was also included, aimed at ensuring the adequacy of allowances in helping beneficiaries to meet their basic needs. Additionally, groups such as trade unions were enabled to set up their own schemes if they so wished.

The equally far-reaching Industrial Injuries Act passed that same year bestowed various cash grants upon workers suffering from work-related injuries, such as disablement gratuities and special hardship allowances (aimed at workers unable to carry out their current lines of work or equivalent due to their injuries). Five distinct benefits were also provided for dependents of workers who tragically lost their lives, while allowances were given in cases of approved hospital treatment, constant attendance and unemployability; the latter geared towards disability pensioners unable to take on any form of employment. In addition, the National Assistance Act introduced two years later established a non-contributory social safety net for those in need, providing support such as shelter and nutritional assistance.

The Attlee years also witnessed the passage of other welfare measures affecting different strata of British society. Dockworkers became entitled to pay in cases of unemployment or underemployment, while a state scheme for mature university students was set up. Regulations provided numerous pension entitlements for NHS employees, while the National Insurance and Civil Service (Superannuation) Rules, 1948 provided for preserved pension rights, with a compensation award in cases where individuals experienced the impairment or loss of opportunity to earn a further pension. As a means of helping people reach their potential, a special scheme was instituted in 1947 whereby individuals with a gift for skilled crafts became eligible for grants to undertake training in other locations if no suitable facilities existed near where they lived.

In 1946, certain pensioners with disabilities that added to wear and tear became entitled to a new clothing allowance, while a couple of years later greater eligibility for special education allowances for children was introduced. The 1948 Local Government Act generalised various powers to pay subsistence and travelling allowances to members of local authorities, while also providing payments in cases where council business attendance led to financial loss. The 1947 Agriculture Act incorporated several forms of compensation, such as for disturbance and improvement, while the 1948 Criminal Justice Act provided for the enforcement of payments of compensation or damages. Under the 1948 Children Act, local authorities were empowered to care for children who were orphaned, deserted or unable to be looked after by their parents due to circumstance. Its many provisions included accommodation for children reaching 3 years of age, along with grants for students to help them with the costs of maintenance, training or education. A year later, a system of legal assistance was inaugurated that entitled most people to free legal support in both civil and criminal cases.

Impact of the Attlee Government

The extent to which the social security legislation of the Attlee Government dramatically improved people’s lives can be gauged from a poverty study conducted in York in 1950 by the legendary researcher and humanitarian Seebohm Rowntree, using that location as a representative sample. A follow-up to a previous survey carried out in the same area in 1936, it estimated that the percentage of working-class people in York who lived in poverty stood at 2.77% in 1950, compared with 31.1% 14 years earlier. Although the study undoubtedly overestimated the extent to which poverty fell during that period, it nevertheless highlights the fact that the Welfare State established under Attlee did much to diminish the numbers experiencing hardship. G.R. Lavers, who co-authored the report, argued that the largest improvement since 1936 had come about as a result of the welfare reforms instituted since 1945, going as far as to claim that the Welfare State had greatly overcome poverty. This assertion gave Labour a positive message to convey to the public during the 1951 election campaign, but despite its efforts Labour would be voted out of office, not returning to power until 1964 under the leadership of former minister Harold Wilson.

Wilson Governments

Like Attlee’s administration, Harold Wilson and his ministers inherited a nation in a difficult economic position, one that eventually resulted in the currency being devalued. This culminated in detestable austerity policies, including higher charges for school meals. In a dubious move, one that undoubtedly reflected exaggerated perceptions of welfare fraud that persist to this day, a “four-week rule” was instituted in certain places in July 1968, under which social assistance benefits were removed from recipients after that period if it was believed that suitable work was available. Assessing the impact of this measure, one study edited by the anti-poverty activist and future minister Frank Field estimated that 10% of those affected by the rule subsequently turned to crime as a consequence of losing their benefits. Despite a ministerial claim that this policy had been a success in tackling benefit fraud, Field’s study suggests that it was a misguided decision that caused unnecessary hardship.

Nevertheless, for most of its period in office Labour not only boosted public spending but also rolled out a programme of radical welfare reform that did much to lessen inequality. New benefits were introduced addressing risks that had previously been left uncovered by the Attlee welfare laws. Redundancy pay was set up, along with income supplements for beneficiaries such as unwell, injured and jobless persons; the latter to lessen the impact of unemployment for skilled employees. New allowances for partially incapacitated men were also established, with increased amounts permitted in certain cases.

The 1965 Solicitors Act allowed for grants to be paid in hardship situations, while other laws introduced varying forms of compensation for those affected by compulsory land purchases and damage. National Assistance was superseded by a new Supplementary Benefits Scheme, an overhaul carried out partly to prevent detailed individual enquiries. Reflecting this philosophical shift, changes were made, for instance, to rent allowance payments for non-householders (previously, these had been dependent upon a household’s make-up). Although not without its faults, it was a definite improvement over the previous social assistance arrangements. Higher benefit rates were provided and, although the allowances under the new scheme were mostly the same as under National Assistance (with exceptions such as an additional allowance for long-term claimants), what differed was the fact that the new scheme sought to ensure that benefits would be given as a right to those who met the means-tested conditions, while seniors were entitled to an income guarantee. Measures were also carried out with the intention of enabling widows and women whose marriages had dissolved to receive higher pensions, while regulations established improved levels of financial assistance for disabled people (such as an allowance for severe disabilities) and allowed for Christmas bonuses to be disregarded in the estimation or calculation of earnings when determining national insurance payments. Local tax rebates were created to assist less well-off ratepayers, and the 1965 Matrimonial Causes Act was designed to help women by means of ordering alimony and other forms of payment to the concerned parties. Additionally, measures were undertaken to tackle homelessness and deliver residential services to persons who were ill or living with disabilities.

The social security record of Wilson’s first administration can be judged by the impact its policies had on those living on low incomes. In 1970, the amount that benefits and taxes added to the incomes of those earning £315 annually was more than twice the equivalent amount from 1964. Measurements have also suggested that the number of individuals living in poverty was far lower in 1970 than in 1964, further vindication of Labour’s welfare record from the Sixties. One such benchmark, utilising a 1970-based absolute poverty line, has suggested that the percentage of poor Britons fell from around 20% to around 15% by the end of Wilson’s first premiership. Wilson’s last government, from 1974 to 1976, would also see further landmarks in social security, with various laws passed that established new entitlements including invalidity pensions, mobility and invalidity care allowances for the disabled, earnings-related pensions, and Child Benefit, a universal payment which for the first time extended financial support to a family’s first child and enhanced the amount of assistance allocated to low-income families.

The administrations in context

In a way, both administrations reflected the spirit of the times in which they governed. In the decade or so following the end of hostilities, several war-torn nations in Europe came under the leadership of left-wing coalitions that expanded their social aid systems, while even poorer nations led by progressives, including Burma (Myanmar), Guatemala, Iran and Ceylon (Sri Lanka), undertook reforms in this field. Similarly, during Wilson’s first stint as prime minister, several developing nations led by reformers throughout the Sixties, like India, Turkey, Honduras and the Philippines, also embarked upon their own programmes of welfare innovation. The revolutionary social security reforms implemented under Attlee and Wilson therefore reflected broader geopolitical trends during their incumbencies.

The record of the Attlee and Wilson administrations shows that even under dire economic circumstances much can be achieved in strengthening the social security structure that has done so much throughout the decades to prevent and mitigate poverty in the United Kingdom. Like their forebears, the Starmer Government must never lose sight of Labour’s goal to make Britain a nation free of injustice. A more generous welfare system is a prerequisite to this. Although it will likely take time for the financial situation to improve to the point where Labour can pursue looser, more expansionary fiscal measures to attain its reformist vision, the Starmer Government must nevertheless reinforce the Welfare State as an effective tool against the scourge of poverty, as most Labour governments have done in the past. The welfare records of the Attlee and Wilson ministries are ones that the new Labour administration can learn greatly from today.

Every October for the past 52 years, the International Hot Air Balloon Festival has taken place in Albuquerque, New Mexico. This event is the world's largest hot air balloon festival, with over 500 hot air balloons and nearly 1 million people in attendance. The hot air balloon made its first American flight in 1793, yet it still captures our attention and imagination. So, what is the history behind these magnificent flying balloons?

Angie Grandstaff explains.

A depiction of an early balloon flight in Annonay, France in 1783.

The Origins of Hot Air Balloons

The idea of flying is something that humans have fantasized about for centuries. Many have theorized about how this could happen. English philosopher Roger Bacon hypothesized in the 13th century that man could fly if attached to a large hollow ball of copper filled with liquid fire or air. Many dreamed of similar ideas, but it wasn’t until 1783 that the dream became a reality.

French brothers Joseph-Michel and Jacques-Etienne Montgolfier were paper manufacturers who observed that a paper bag would rise if hot air was put inside it. Many successful experiments proved their theory. The Montgolfier brothers were to demonstrate their flying balloon to King Louis XVI in September 1783, and they enlisted a famous wallpaper manufacturer, Jean-Baptiste Réveillon, to help with the balloon design. The balloon was made of taffeta and coated with alum for fireproofing. It was 30 feet in diameter and decorated with zodiac signs and suns in honor of the King.

A crowd of 130,000 people, including King Louis XVI and Marie Antoinette, watched the Montgolfier brothers place a sheep, rooster, and duck in a basket beneath the balloon. The balloon floated for two miles and was safely returned to the ground with the animals unharmed. This successful flight showed what was possible, and they began planning a manned trip into the sky.

There was much concern about what the high altitude might do to a human, so King Louis XVI offered a condemned prisoner to be the first to fly. But Jean-Francois Pilatre de Rozier, a chemistry and physics teacher, asked for and was granted the opportunity to be the first. The Montgolfier brothers sent de Rozier into the sky on several occasions. Benjamin Franklin, the American Ambassador to France at the time, witnessed their November 1783 flight. Franklin wrote home about what he saw, bringing the idea of hot air balloons to American visionaries.

An American Over the English Channel

Advances were being made with different fabrics and gases, including hydrogen, to keep the balloon aloft. Many brave individuals were heading into the skies. Boston-born Dr. John Jeffries was eager to fly. Jeffries offered to fund French inventor Jean-Pierre Blanchard’s hot air balloon expedition to cross the English Channel if he was allowed a seat. Dr. Jeffries was a medical man interested in meteorology, so this trip into the clouds fascinated him.

The two men headed into the air from the cliffs of Dover, England in January 1785. Blanchard’s gear and a boat-shaped gondola carrying him and Jeffries weighed down the hydrogen-filled balloon. The balloon struggled with the weight as it headed across the channel, so much so that they had to throw everything overboard. Their desperation to stay in the air even led them to throw the clothes on their backs overboard. The pair landed safely in France minus their trousers but were greeted by locals who thankfully clothed them.

First Flight in America

Blanchard’s groundbreaking achievements in Europe brought him to America in 1793. He offered tickets to watch the first manned, untethered hot air balloon flight in America, launched from the Walnut Street Prison yard in Philadelphia. George Washington was in attendance, along with future presidents Thomas Jefferson, John Adams, James Madison, and James Monroe. Blanchard, who did not speak English, was given a passport by Washington to ensure safe passage wherever he landed. Blanchard ascended 5,800 feet into the air and landed 15 miles away in Deptford, New Jersey.

Europe dominated the field of aeronautics, but Blanchard’s first American flight demonstrated the possibilities of flight to America and its leaders. It inspired American inventors and explorers to take to the skies. It was a significant step in the global progress of aviation. An interesting side note about Blanchard: his wife Sophie was also an avid balloonist, a woman ahead of her time. They both died in separate ballooning accidents.

Early American Balloonists

The Montgolfier brothers' ballooning adventures led to balloon madness in America. There was much interest in the science of flying balloons, as well as in how balloons could be used for entertainment.

Philadelphia doctor John Foulke was fascinated with the science of ballooning. He witnessed the Montgolfier brothers’ successful manned hot-air balloon flights in Paris with Benjamin Franklin. Foulke returned to his Philadelphia home and conducted experiments, sending small hot air balloons into the sky. He lectured at the University of Pennsylvania on ballooning, even inviting George Washington to one. Washington could not attend but was keenly interested in hot air balloons and saw their potential for military use. Foulke began raising funds to build America's first hot air balloon but never reached his goal.

While Foulke was lecturing about the science of ballooning and attempting to raise funds, a Bladensburg, Maryland tavern owner and lawyer, Peter Carnes, was ready to send a balloon into the air in June 1784. Carnes was a very ambitious man with an entrepreneurial spirit. He saw Americans’ enthusiasm for the magnificent flying balloons as a way to make money. Interestingly, Carnes had very little knowledge about how to make a balloon take flight, but against all odds, he built one. His tethered, unmanned balloon was sent 70 feet into the air. Carnes then set up a more significant event in Baltimore, selling tickets to a balloon-mad city for a manned flight. Unfortunately, Carnes was too heavy for the balloon, but a 13-year-old boy, Edward Warren, volunteered to take his place. Warren ascended into the sky and was brought back safely to the ground, becoming the first American aviator.

Cincinnati watchmaker Richard Clayton saw ballooning as an opportunity to entertain the masses. In 1835, he sold tickets to the launch of his Star of the West balloon. This 50-foot-high, hydrogen-filled balloon carried Clayton and his dog. Once a mile above the city, Clayton, wanting to put on the best show for his crowd, threw his dog out of the balloon. The dog parachuted to the ground safely. Clayton’s nine-hour trip took him to present-day West Virginia. This voyage, Clayton’s Ascent, was commemorated on jugs and bandboxes, some of which are part of the Cincinnati Art Museum’s collection. Clayton traveled to many American cities with his balloons and entertained thousands, using his connections with the press to help bring in the crowds.

Thaddeus Lowe was a New Hampshire-born balloonist and inventor who was primarily self-educated. He began building balloons in the 1850s, traveling the country, giving lectures, and offering rides to paying customers. Lowe believed hot air balloons could be used for communication and was devising a plan to build a balloon that could cross the Atlantic Ocean when the Civil War began.

Balloons in the Civil War

President Lincoln was interested in finding out how flying balloons could gather intelligence for military purposes. In June 1861, Lowe was summoned to Washington D.C., where he demonstrated to President Lincoln how a balloon's view from the sky combined with telegraph technology could give the Union Army knowledge of the Confederate troop movements. President Lincoln saw how this could help his army. So, he formed the Union Army Balloon Corps. Thaddeus Lowe was the Corps' Chief Aeronaut. Lowe used a portable hydrogen gas generator that he invented for his seven balloons.

The Peninsula campaign gave Lowe his first chance to show how his balloons could contribute positively to the Union Army. In the spring of 1862, he was able to observe and relay the Confederate Army’s defensive setup during the advance on Richmond. Lowe’s aerial surveillance gave the Union Army the location of artillery and troops during the Fredericksburg campaign in 1862 and the Chancellorsville campaign in 1863.

The Balloon Corps made 3,000 flights during the Civil War. The surveillance obtained from these flights was used for map-making and communicating live reports of battles. The balloon reconnaissance allowed the Union to point their artillery in the correct direction even though they couldn’t see the enemy, which was a first. The Confederates made several attempts to destroy the balloons, but all attempts were unsuccessful. The balloons proved to be a valuable tool in war. 

Unfortunately, Thaddeus Lowe faced significant challenges from Union Army leaders who questioned the cost of his balloons and his administrative skills. Lowe was placed under stricter military command, a difficult situation for him. Ultimately, Lowe resigned from his position in the Balloon Corps, and the use of balloons during battle ceased. Lowe's journey led him back to the private sector, where he eventually settled in Pasadena, California, and continued his inventive pursuits, eventually holding 200 patents.

Modern Hot Air Balloons

Hot air balloons lost their popularity as America entered the 20th century. But in the 1950s, Ed Yost set out to revive the hot air balloon industry. Yost is known as the Father of Modern Hot Air Ballooning. He saw the need for the hot air balloon to carry its own fuel, so he pioneered the use of propane to heat the inside of the balloon. Yost also created the teardrop balloon design. He experimented with balloons, including building his own, and made the first modern-day hot air balloon flight in 1960, strapped to a chair attached to a plywood board beneath a propane-fueled balloon, traveling for an hour and a half over Nebraska. His improvements made hot air balloons safer and semi-maneuverable. Yost crossed the English Channel and attempted to cross the Atlantic Ocean solo. His attempt across the Atlantic failed, but he built a balloon for Ben Abruzzo, Maxie Anderson, and Larry Newman to try again. The Double Eagle II was the first balloon to cross the Atlantic in 1978.

Yost’s achievements and those of many other American hot air balloon enthusiasts helped the sport of hot air ballooning take flight in the second half of the 20th century. Hot air balloon festivals now take place around the country year-round and are major tourist attractions. The Albuquerque International Hot Air Balloon Festival is the world's biggest hot air balloon festival. Hot air balloons have become big business for travelers who want a bird’s eye view of America.

Humans have always wanted to conquer the skies. The curiosity and ingenuity of people like the Montgolfier brothers laid the foundation for Americans to push the boundaries of aviation. The early experiments of scientists and entertainers helped 20th-century inventors and adventurers build safer hot air balloons. Today, there is a vibrant hot-air balloon culture in America. Millions of Americans celebrate the scientific milestones and the sheer joy of flight every year. The history of hot air ballooning shows us the power of imagination and dreams.

Angie Grandstaff is a writer who loves to write about history, books, and self-development. 

The naval victory at Midway on June 4, 1942 has rightly been recognized as one of the greatest in the history of the US Navy, and one of the most significant victories in the history of armed conflict. However, events did not follow the plan formulated by US Pacific Fleet commander Admiral Chester Nimitz, and the battle was very nearly lost by Pacific Fleet forces. Conspicuously missing from earlier accounts of the Battle of Midway is a description of the Nimitz plan to confront the Japanese carrier fleet.

Dale Jenkins explains. Dale is author of Diplomats & Admirals: From Failed Negotiations and Tragic Misjudgments to Powerful Leaders and Heroic Deeds, the Untold Story of the Pacific War from Pearl Harbor to Midway. Available here: Amazon US | Amazon UK

Chester Nimitz while Chief of Naval Operations.

The actions taken by Pacific Fleet forces during the Midway battle deviated significantly from the Nimitz plan. But, despite the deviation, the battle was won. What occurred at Midway was essentially a broken play, but positive action from a junior task force commander, astute calculations from an air group commander, and intrepid, skilled flying from carrier pilots saved the day. 

The intelligence team at Pearl Harbor had decrypted sufficient Japanese messages by May 27 to advise Nimitz of expected Japanese fleet movements on June 4. Nimitz’s intelligence staff, headed by LCdr Edwin Layton, informed him that the Japanese carrier fleet, or Striking Force:

“would probably attack on the morning of 4 June, from the northwest on a bearing of 325 degrees. They could be sighted at about 175 miles from Midway at around 0700 (0600 local) time.”(1)

Layton expected four or five Japanese carriers steaming from the northwest at 26 knots. There were four: Akagi (flagship of carrier Striking Force commander Vice Admiral Chuichi Nagumo), Kaga, Hiryu and Soryu. Nimitz had a week to plan a defense against the attack, formulate a counter-attack, and continue to assemble forces on Midway to carry out the plan. He issued Operation Order 29-42, which detailed the forces that were to be employed, including the scouting operation of PBY amphibious planes and a picket line of submarines.

Additional Japanese forces included an amphibious Occupation Force operating south of the carrier force, a separate force to attack the Aleutian Islands, and a battleship force trailing 300 miles astern of the carriers. The battleship force included super-battleship Yamato with Combined Fleet commander Admiral Isoroku Yamamoto embarked.

The Japanese planned to launch 108 planes, half the total air complement of the four carriers, against the shore defenses of Midway Island at 0430 on June 4, when approximately 220-240 miles from Midway. The remaining reserve force would be armed with anti-ship bombs and torpedoes to combat any unexpected Pacific Fleet forces. The Japanese command expected the Pacific Fleet carriers to rush to the scene from Pearl Harbor, and the Japanese would destroy them with their carrier planes and battleships in a showdown confrontation.

To counter the Japanese carrier force, Nimitz had planes on Midway Island and three Pacific Fleet carriers, Enterprise, Hornet and Yorktown, under the overall command of Rear Admiral Frank Jack Fletcher. Fletcher, embarked on Yorktown, was in direct command of Task Force 17. A more junior rear admiral, Raymond Spruance, embarked on Enterprise, commanded Task Force 16 of Enterprise and Hornet.  

Operating range

A particular problem was the difference in operating ranges between the Japanese carrier planes and those of the Pacific Fleet. The Japanese operating range was 240 miles, while the equivalent for the Pacific Fleet planes was just 175 miles. That difference meant the Japanese planes could attack the Pacific Fleet carriers while the Pacific Fleet planes were still out of range of the Japanese carriers. Nimitz had to have a plan that would get the carriers through the band between 240 miles and 175 miles without being spotted and attacked. The difference, 65 miles, meant that a carrier covering that distance at an average of 25 knots would have to steam for about 2½ hours through a band where it was vulnerable to attack without being able to strike back.

Nimitz designed an attack on the Japanese carrier fleet by moving the Pacific Fleet carriers through the night of June 3-4, under cover of darkness, to arrive at a position where they had the best chance to launch an attack before they were discovered by Japanese scouts. He planned to use PBY amphibious planes from Midway as scouts, because the Japanese could sight the PBYs without being alerted to the presence of carriers. The light wind coming out of the southeast meant that the Japanese carriers, steaming into the wind, could launch and recover planes without changing course. From the PBYs’ 0430 takeoff time from Midway, plus plane and ship speeds, Layton calculated that the PBYs should encounter the Japanese at about 0600, 175 miles from Midway. The planes on Midway would launch immediately upon receipt of the scouting report. The Japanese carriers would move about 35 miles after the PBY report before the Midway planes intercepted them at about 0720, 140 miles from Midway.

Nimitz formulated a plan for a concentration of force of Midway planes and carrier planes. To accomplish this, he determined that the Pacific Fleet carriers were to be at a position 140 miles northeast of the interception point at 0600. That position was also 200 miles directly north of Midway Island and was designated as the navigation reference point for the carrier force. When the report from a PBY was received at approximately 0600, both Midway and the three carriers would launch their planes. In a successful execution, all the Pacific Fleet planes would arrive over the Japanese carriers at approximately 0720 in a concentration of force. The goal was a victory by 0800-0815.

Because the Japanese planes attacking Midway would not return before 0830, the Pacific Fleet attack would be against just half of the Japanese air defenses. In addition, if the flight decks of the Japanese carriers were heavily damaged, even if the carriers themselves were not sunk, the planes returning from Midway would have to ditch in the ocean.

A graphic of Nimitz’s plan at the Battle. Copyright Dale Jenkins. Printed with permission.

After-action report

The after-action report of Rear Admiral Frank Jack Fletcher confirms the intended movements of the carrier force in conformity with the Nimitz plan:

ENTERPRISE and HORNET maintained their air groups in readiness as a striking force. During the night of June 3-4 both forces [TF-17 and TF-16] proceeded for a point two hundred miles North of Midway. (Emphasis added) Reports of enemy forces to the Westward of Midway were received from Midway and Commander-in-Chief, Pacific Fleet. These reports indicated the location of the enemy Occupation Force but not the Striking Force.(2)

The ComCruPac (Fletcher) report refers to PBY scouts on June 3, when the Occupation Force was sighted and the carrier Striking Force was still under heavy clouds. It confirms Fletcher’s knowledge of the plan and his intended movements. Further confirmation of the Nimitz plan and the ordered position of the carriers to be 200 miles north of Midway at 0600 on June 4 is contained in published accounts of at least three contemporary historians who had the opportunity to interview participants during and after the war: Richard W. Bates, Samuel Eliot Morison, and E. B. Potter. (3)

On June 3 the PBYs took off from Midway at 0430 and contacted the Japanese occupation force. This contact confirmed that the Japanese were proceeding with the plan as previously decrypted by Layton’s intelligence unit. The carrier force was still under a heavy weather overcast and was not discovered on June 3.

On June 4 the PBYs launched again at 0430. At 0534 a sighting of enemy carriers was transmitted to Admirals Fletcher and Spruance, and to the forces on Midway. At 0603 the earlier report was amplified:

“2 carriers and battleships bearing 320 degrees, distance 180, course 135, speed 25 knots.” (4)

Immediately after receiving the latter report the planes on Midway took to the air.  Fighters rose to defend Midway, and six Avenger torpedo planes and four B-26s fitted with torpedoes flew to attack the Japanese carriers.  Two more carriers were in the Japanese formation but were not seen by the PBY pilot.

However, Pacific Fleet carriers were not in position to launch planes at 0603 because Fletcher, while heading southwest overnight June 3-4 toward the designated position 200 miles north of Midway, decided that the scouting as ordered in Operation Order 29-42 might not be sufficient. At first light, he ordered Yorktown carrier planes to conduct a separate sweep to the north and east. To do this the carriers had to change course to the southeast to launch planes into the wind, and to be on that course to recover the planes. These course changes took the carriers away from the interception point. When the 0603 message from the scout arrived, the carriers were 200 miles east and north of the interception point and 25 miles beyond their operating range of 175 miles. 

At 0607 Fletcher sent a message to Spruance:

“Proceed southwesterly and attack enemy carriers when definitely located. I will follow as soon as planes recovered.”(5)

Spruance, detached with Enterprise and Hornet, proceeded southwest at all possible speed to close the range, but at an average speed of 25 knots it would take an hour to cover 25 miles.  Meanwhile, the planes from Midway arrived separately over the Japanese carriers and attacked. The plan for a concentration of force had failed.

The Avengers and B-26s, arriving at 0710, flew into the teeth of the Zero fighter defenders. They attempted valiant torpedo runs against two of the four carriers, but the inexperienced pilots were hopelessly outclassed by the fast, agile and deadly Zeros. There were no hits or even good chances for hits, and the Zeros sent five of the six Avengers flaming into the ocean. The B-26s hardly did better, but one pilot, with his plane on fire and probably knowing he was never getting home, dove at the bridge of the Japanese flagship. He missed by a few feet and crashed into the ocean.

 

The B-26 pilot may have done as much as anyone that day to turn the tide of the battle. At 0715 the shocked Admiral Nagumo, already notified that the Midway attack had run into heavy resistance, decided that a second attack on Midway was required. He ordered the armaments of the standby force to be changed from anti-ship bombs and torpedoes to point-detonating bombs for land targets. All of this would require over an hour to complete, and would not be finished before the Midway attack force returned to land at about 0830, low on fuel.

Admiral Spruance, ready to launch planes from his two carriers at 0700, plotted courses to a new interception. On courses ranging closely together between 231 degrees and 240 degrees, but delayed at the launch, the planes were expected to arrive at the new interception point at 0925 – almost 2½ hours after the launch time.

At 0917, with the Midway force landed, Admiral Nagumo turned northeast to confront the Pacific Fleet carriers that a Japanese scout had discovered earlier. Decisions he had made, including landing the Midway planes, had delayed any attack on the American carriers. The Americans were still making attacks, but the Zeros swept them aside easily. Now Nagumo was supremely confident. Rearming and refueling the entire air complement on all four carriers would be completed by 1045. They would launch a massive, coordinated attack of over 200 planes and sink the American carrier fleet.

The Enterprise and Hornet planes crossed the revised intercept point at 0925 but found nothing but open ocean. The Hornet air group commander took his squadrons southeast to protect Midway. The Enterprise air commander realized that the Japanese carrier force had probably been delayed by earlier actions. He took two squadrons of dive bombers on a northwest course to retrace the Japanese movements, then began a box search that came upon a Japanese destroyer, which led him to the Japanese carriers. Diving out of the sun at 1025, his bombers caught the Japanese defenders by surprise, and in five minutes Akagi and Kaga were destroyed. The Yorktown planes suddenly appeared and destroyed Soryu.

Hiryu, the remaining Japanese carrier, launched dive bomber and torpedo plane attacks which led to the loss of Yorktown. Later in the day on June 4 Enterprise dive bombers destroyed Hiryu. The greatest victory of the US Navy had been realized.

 

Aftermath

In the aftermath of the Midway victory no one was going to complain about not following the Nimitz battle plan, least of all Admiral Nimitz.  Consequently, the existence of the plan has been overlooked until now. Whether following the plan would have resulted in the same victory by Pacific Fleet forces, or the same victory without as many losses in ships, planes and personnel, has never been explored and is left to speculation.

 

As a reminder, Dale is the author of Diplomats & Admirals: From Failed Negotiations and Tragic Misjudgments to Powerful Leaders and Heroic Deeds, the Untold Story of the Pacific War from Pearl Harbor to Midway. Available here: Amazon US | Amazon UK

 

 

References

(1) Layton, Edwin T., And I Was There, Konecky & Konecky, Old Saybrook, CT, 1985, p. 430

(2) Report of Commander Cruisers, Pacific Fleet (Adm. Fletcher), To: Commander-in-Chief, United States Pacific Fleet, Subject: Battle of Midway, 14 June 1942, Pearl Harbor, T.H., Para. 3, included as Enclosure (H) in United States Pacific Fleet, Advance Report – Battle of Midway, 15 June 1942.

(3) Bates, Richard W., The Battle of Midway, U.S. Naval War College, 1948, p. 108; Morison, Samuel Eliot, Coral Sea, Midway, and Submarine Actions, Naval Institute Press, 1949, p. 102; Potter, E.B., Nimitz, Naval Institute Press, 1976, p. 87.

(4) Morison, p. 103.

(5) Morison, p. 113.

Unlike many other Poles who took part in the Civil War on the Union side, Count Adam Gurowski was not a soldier or a commander, and his actions had no influence on the course of the war. He was primarily a publicist, whose criticism of Abraham Lincoln's government was so violent and uncompromising that the US president even treated him as a potential assassin. Rafal Guminski explains.

Adam Gurowski.

Count Adam Gurowski: History and Political Activity in Europe

Adam Gurowski was born on September 10, 1805, into a noble family bearing a count's title. He was the oldest of seven siblings. His sister, Cecilia, was married to Baron Frederiks, adjutant general to Tsar Nicholas I, and his brother, Ignacy, married the Spanish Infanta Isabella de Borbón, daughter of the Duke of Cadiz, and became a Spanish grandee. As the oldest son, Adam received a good education. After completing the provincial school, he studied law, philosophy, history, and classical philology in Berlin, Leipzig, Göttingen, and Heidelberg.

After his studies, Gurowski returned to the Kingdom of Poland and joined a political party from the western part of the country, which sought to maintain the status quo and preserve the Kingdom's autonomy. The count quickly left the organization, and in January 1829 he is said to have taken part in preparations for the so-called coronation plot, which aimed at the death of the Russian Tsar Nicholas I. After the outbreak of the November Uprising, Gurowski became involved in organizing the insurgent administration and civil authorities, an effort that ended in failure. The count became a staunch critic of the insurgent dictatorship, and after its fall he became a member of the Patriotic Society, on whose behalf he demanded the dethronement of Tsar Nicholas I as King of Poland.

Despite being blind in one eye, he joined the insurgents as an ordinary soldier and took part in battles, for which he was promoted to officer and received the Silver Cross of Virtuti Militari. After leaving the army, he became an envoy of the Patriotic Society to Paris, where, in French magazines such as the Tribune, François, National, Réformateur, La Révolution de 1831 and Le Globe, he criticized the authorities of the November Uprising. After the fall of the Uprising, Gurowski struggled with the instability of his political views and a tendency toward sharp disputes, which quickly alienated those closest to him.

The year 1834 brought a radical change in his views and ideas. His statements began to include comments of a pan-Slavic nature, with Poland as the unifier of the Slavic world. He also came to view the Polish emigration differently, having previously assessed its efforts for the liberation of the country negatively. The change in the count's outlook is best seen in his interest in the postulates of French utopian socialism. The shift in Gurowski's worldview extended even to such basic concepts as nation and patriotism.

The count's new views set him at odds with his family and Polish patriotic circles, but it was his request for amnesty, addressed to Tsar Nicholas I, and his recognition of Russia as the country destined to lead the unification of the Slavic nations that made Gurowski a national apostate. His stay in Russia turned out to be difficult. The Tsarist state apparatus forced him to reassess his views once again, and the complete isolation from his family and countrymen began to weigh heavily on him.

 

A Polish Count on American Soil

In 1840, Gurowski returned to the Kingdom of Poland to sort out his property and family affairs. His attempt to recover his confiscated property ended in failure. Finding himself in a hopeless situation, the count decided to emigrate. In April 1844, he crossed the border of the Kingdom of Poland for the last time and went to the West. For some time he lived in Bavaria, Hesse, and then in Belgium, Switzerland, and Italy. Unable to settle down permanently, the Pole decided to leave the Old Continent for the United States of America. On December 2, 1849, Count Gurowski arrived in New York.

The Pole's situation in America was quite stable at first. He had brought a supply of cash with him from Europe, and thanks to letters of recommendation he had access to intellectual circles from the very beginning. After half a year, the count's financial situation began to deteriorate, which forced him to seek support outside New York. In Boston, he was even offered the chance to lecture on law at Harvard University, but due to poor attendance his lectures were quickly suspended. During this time, the Pole became keenly interested in the issue of slavery and took an active part in the life of the local intellectual elite. He came to know two leaders of American literature and poetry, Henry W. Longfellow and James R. Lowell, who shared his particular aversion to slavery and his criticism of that institution.

Eventually, the Pole returned to New York and in 1852 took a job at the New York Daily Tribune, writing a column on European affairs and criticizing the rule of Tsar Nicholas I. Despite his continued interest in European affairs, the Pole was fascinated by his new homeland, which he admired in many ways and whose superiority over European countries he recognized. He traveled extensively in the northern and southern states and published his observations in “America and Europe”, which was warmly received by critics and praised for its impartiality and insightful observations. He paid special attention to the unique relationship between power and freedom: in his opinion, in Europe these two forces competed with each other, while in America they cooperated for the common good and development. The count was equally impressed by the structure of American society, whose superiority he saw in the absence of a class system dominated by an aristocracy. He noted with admiration that the law was created on the initiative of the people and for the people, and not by a privileged ruling group.

Gurowski's relations with the New York Daily Tribune began to deteriorate significantly, and as a result, the count lost his job. From then on, for four years he supported himself by publishing articles in various magazines. During this time, he continued to write a book on the history of world slavery, which was published in 1860 under the title “Slavery in History”.

 

Abraham Lincoln under harsh criticism from Adam Gurowski

The Pole, increasingly vocal in his criticism of slavery, decided to move to the US capital, Washington, where he hoped for greater understanding of his views and wanted to seek support from politicians of the radical wing of the Republican Party. Thanks to his work at the New York Daily Tribune and his books “America and Europe” and “Slavery in History”, the Pole was already well known in Washington. He quickly established important acquaintances, including Salmon P. Chase, the future chief justice of the United States, and John A. Andrew, Governor of Massachusetts. After the outbreak of the Civil War, he joined a volunteer unit under the command of Cassius M. Clay, which was to protect and patrol the capital. After the threat had passed, the Pole got a job at the State Department, where his duties included reading the European press and preparing reports on articles of interest to the department. However, Gurowski lost his job after his diary, in which he criticized the government, the president, and the Union generals, fell into the wrong hands. He ultimately published the contents of the diary in December 1862. Thus began his crusade against Abraham Lincoln.

Adam Gurowski should be considered the most ardent critic of the federal government and the president at the time. Although the Pole spoke positively about Lincoln's inaugural address, the government's lack of decisive action after the attack on Fort Sumter and the riots in Baltimore confirmed his dislike of Abraham Lincoln. Gurowski stated that the Union government "lacked the blood" to defeat the Confederacy, and that calling up 75,000 volunteers was nowhere near enough to do so. He also believed that the situation overwhelmed Lincoln, who had no leadership skills and could not compare to George Washington or Andrew Jackson. He considered the president's greatest flaw to be his lack of decisiveness, and he saw it as the cause of the Army of the Potomac's defeats. Gurowski also criticized Lincoln's personnel decisions, especially the delay in dismissing General George McClellan as commander of the Army of the Potomac. Yet Gurowski could also appreciate Lincoln: he praised the president's conduct after the defeat at Chancellorsville. The count accused Lincoln of manipulating election promises and of making military decisions through the prism of politics, which, he claimed, cost many soldiers their lives. Nevertheless, when the president stood for re-election, Gurowski showed a shadow of support for him, fearing the election of the hated McClellan and his pro-slavery lobby.

There is no doubt that Gurowski's criticism of the president was often exaggerated, but in some respects the Pole's opinion coincides with the judgments of modern historians. The count's attitude towards the president was dictated by his views and his difficult, uncompromising personality. The Pole's most positive opinion of Lincoln came after the president's death: in Gurowski's eyes, the murdered president became a martyr close to sainthood, who would go down in world history as a great and noble man.

 


 

 



Because she played her cards right, Anne of Cleves, the fourth wife of King Henry VIII of England, managed to escape the wrath he inflicted on two of his previous wives and lived a privileged life on good terms with the king after their separation.

C. M. Schmidlkofer explains.

Anne of Cleves. Painting by Barthel Bruyn the Younger.

It seems unfair that Anne of Cleves, the fourth of King Henry VIII's six wives, is known to history as the “ugly” wife when in reality it was her wit and intellect that make her remarkable.

Born in Dusseldorf in 1515, Anne of Cleves was the daughter of Maria of Julich-Berg and Johan III, Duke of Cleves. Her marriage to Henry on January 6, 1540, was fraught from the start with disappointment and misunderstanding.

First, at the tender age of 24, she was invited to become Henry's fourth bride on the strength of a portrait the king had commissioned of her, which he later said looked nothing like her. But that complaint came a bit later.

The marriage was a political arrangement fostered by Henry's “fixer,” Chief Minister Thomas Cromwell, who sought to temper the power plays of Spain and France while boosting Protestant influence through the union.

 

First meeting

The first meeting between the king and his bride was a disaster: Henry surprised her wearing a disguise, Anne rebuffed the apparent stranger, and the relationship went downhill from there.

The king's grievances began in earnest then, chief among them that she did not look like the commissioned portrait.

He called her a “Flanders Mare,” said she smelled, and reportedly refused to have marital sex with her.

Anne was a fish out of water at the Tudor court. Her upbringing did not include dancing and music, the heart of Tudor life, but focused instead on household skills and the duties of the noblewoman she was expected to become.

In an attempt to prepare herself for life with Henry, and perhaps nervous over what lay ahead, she had the foresight to socialize with her English traveling companions during the voyage to meet him, learning English customs and social skills as well as the king's favorite card games.

Little is known about Anne's feelings about the marriage, but she was keenly aware that two of Henry's first three wives had been banished or beheaded, and that the purpose of any union was to produce a male heir for the king.

 

And although Henry had his coveted son through his third wife, Jane Seymour, who died shortly after giving birth, he was forging ahead with the fourth marriage to secure another.

 

End of marriage

Seven months after the marriage, Henry notified his bride, who had served as queen consort, that their marriage was to be annulled three days hence. His grounds were that the marriage had never been consummated; for good measure, he threw in questions about Anne's brief engagement, years earlier, to Francis, Duke of Bar, in 1527.

Wisely, Anne knew that arguing or pleading to continue the marriage would not succeed, and instead she fully cooperated with the king's wishes. Certainly she had nothing to lose, and as it turned out she gained handsomely.

Henry, possibly relieved by Anne's cooperation, awarded her a generous settlement, granted her the title of “the King's Sister” for as long as she remained in England, and bestowed upon her large tracts of property, including Hever Castle – the childhood home of Henry's second wife, Anne Boleyn, whom he had had beheaded in 1536.

Unlike Henry's first wife, Catherine of Aragon, who resisted the king's demand for an annulment on religious grounds and ended up banished from court until her death in 1536, Anne was allowed to keep her jewels, her plate and her dresses, and received a generous annual stipend along with revenue from other properties.

She willingly turned over her wedding ring to Henry, asking that it be destroyed “as a thing which she knew of no force or value.”

Henry seemed to value Anne’s counsel after their separation and continued a cordial relationship with her until he died in 1547.

 

Later years

At that point Anne lost her title of the “King's Sister” and moved away from court, leading a quiet life until Mary I, Henry's daughter by his first wife, Catherine of Aragon, and Anne's former stepdaughter, took the throne in 1553. Anne briefly came under suspicion when a plot to depose the queen and place Elizabeth on the throne was investigated, because Anne had a close relationship with Elizabeth, the daughter of the king and Anne Boleyn.

She escaped a charge of treason and remained cordial with Mary I until her own death, after a brief illness, in 1557 at the age of 41 at Chelsea Old Manor, her home and formerly the home of Catherine Parr, Henry's sixth and last wife.

 


 

 
