
Guy's Math & Astro Blog


Category Archives: Math

Variable star measurements with a Seestar

02 Tuesday Dec 2025

Posted by gfbrandenburg in astronomy, Math, Optics, science

≈ Leave a comment

Tags

astronomy, Math, Optics, RRLyrae, Seestar, space, Telescope, variable stars

Most (but not all) of the variable stars I tried over the past month or so were simply too bright for this sensor. The target stars were saturated (i.e. some of the pixels’ electron wells simply overflowed) despite my using the shortest available exposure, adding the light-pollution filter, and refocusing. The Seestar won’t let you change the ISO or expose for less than 10 seconds.

I did get some believable light curves on BE Lyncis (aka HD 67390) and U Cephei (aka HD 5679). I attach some graphs I made.

I used some black plastic I had, and my set of Forstner bits, to make holes of 1”, 1-1/8”, 1-1/4”, and 1-1/2” (as aperture masks), in case I want to try brighter variable stars like RR Lyrae again.
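As a rough sanity check on how much those masks should help (my own sketch, not from the post, and assuming the 50 mm aperture of a Seestar S50): flux scales with collecting area, so a 1” hole should dim every star by about 1.5 magnitudes.

```python
import math

def dimming_magnitudes(full_aperture_mm: float, mask_diameter_mm: float) -> float:
    """Magnitudes of dimming when a circular mask stops down the aperture.

    Flux is proportional to collecting area (diameter squared), and a flux
    ratio R corresponds to 2.5 * log10(R) magnitudes.
    """
    flux_ratio = (full_aperture_mm / mask_diameter_mm) ** 2
    return 2.5 * math.log10(flux_ratio)

# Assumed 50 mm aperture (Seestar S50); mask hole sizes from the post, in inches.
for inches in (1.0, 1.125, 1.25, 1.5):
    mask_mm = inches * 25.4
    print(f'{inches:5.3f}" mask: {dimming_magnitudes(50.0, mask_mm):.2f} magnitudes dimmer')
```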

I am very impressed that the Seestar absolutely nails the location of every single one of these targets! I’m also pleased that AstroImageJ allows quick and easy plate-solving!

Yes, Defocus!

18 Tuesday Nov 2025

Posted by gfbrandenburg in astronomy, astrophysics, History, Hopewell Observatorry, Math, Optics, Uncategorized

≈ Leave a comment

Tags

Hopewell Observatory, light curve, Math, photometry, saturation, Seestar, Telescope, variable star

This graph gives me confidence that defocusing will solve my overflow problem. It’s a profile of the number of photons/electrons captured (vertical axis) versus the distance from what I thought was the exact center of the star RR Lyrae aka HD 182989.

(It is amazing how fast the computer works this out! I’m used to my middle school or high school students working out things like this by hand, which is a very slow and tedious process! Let us give a tip of the hat to Williamina Fleming, who was the first person to notice and record that RR Lyrae was a variable star. She did so by examining glass plates on which stars’ light had left little dark spots in the suspended silver-halide grains, and without a blink comparator! Wow!)

Notice that there is one saturated, flat-topped region right at the center of the profile.

If I defocus the camera a bit, that saturated value would get spread out over an Airy disk that might look like this:
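Numerically, here is a minimal sketch of why spreading the light out helps (my own illustration, with made-up numbers for the star’s total signal and the sensor’s full-well depth): the same total number of electrons, spread over a wider blur, keeps the brightest pixel below saturation.

```python
import numpy as np

FULL_WELL = 60_000           # assumed full-well depth, electrons per pixel
TOTAL_ELECTRONS = 2_000_000  # assumed total electrons from the star in one exposure

def peak_pixel(total_electrons: float, fwhm_pixels: float, size: int = 51) -> float:
    """Peak pixel value when total_electrons are spread over a Gaussian blur."""
    sigma = fwhm_pixels / 2.355
    y, x = np.mgrid[-(size // 2): size // 2 + 1, -(size // 2): size // 2 + 1]
    psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    psf /= psf.sum()                    # normalize so all pixels sum to 1
    return total_electrons * psf.max()  # electrons landing in the brightest pixel

for fwhm in (2, 4, 8, 16):  # roughly: in focus -> increasingly defocused
    peak = peak_pixel(TOTAL_ELECTRONS, fwhm)
    status = "SATURATED" if peak > FULL_WELL else "ok"
    print(f"FWHM {fwhm:2d} px: peak {peak:10.0f} e-  {status}")
```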

Are we alone – 2?

14 Tuesday Oct 2025

Posted by gfbrandenburg in astronomy, astrophysics, History, Math, nature, science, Uncategorized

≈ Leave a comment

Tags

aliens, civilization, evolution, exoplanets, life, philosophy, science, science-fiction, space travel, Speed of light

Someone else’s take on this topic.

If aliens could travel at a fraction of the speed of light, why haven’t they colonized our Galaxy by now?

We’ve all been brainwashed by years of Star Wars, Star Trek, Marvel Universe, Avatar, etc, to think that space should be teeming with intelligent civilizations, most of them vaguely like ourselves, working with and against each other to carve up the galaxy. As a result, it’s easy to overlook the huge assumptions embedded in your question.

  • Habitable worlds exist. Do they? It seems overwhelmingly likely, given that there are probably a trillion planets in the Milky Way alone, but for now we don’t know. Perhaps there are many near-miss planets like Venus and Mars, but extremely few true Earth analogs. For instance, life might require a particular rock/ice ratio, a large moon, and a specific style of plate tectonics. That level of specificity seems unlikely to me, but that’s just my random opinion. Until we find another planet with truly Earthlike conditions, we cannot say for sure that this is true.
  • Alien life exists. Does it? Honestly, we have no idea. There are many strong arguments suggesting that the fundamental biochemistry of self-replication is practically inevitable given the right conditions. But we don’t know how common those conditions are (see above), and even then we don’t know if there is some extremely low-probability gap that hinders the emergence of even simple microbial life.
  • Intelligent life exists. Does it? This one is a complete unknown. Keep in mind that there was no intelligent, self-aware life on Earth for 99.999% of its existence. Maybe the emergence of intelligence here was a rare fluke, unlikely to be reproduced anywhere else. Rat-level intelligence seems to have existed for at least 200 million years without any indication that higher level intelligence would confer a big evolutionary advantage. (There are all kinds of speculations about why intelligent life could not emerge until now on Earth, but these are just-so stories, trying to paint an explanation on top of a truth that we already know.)
  • Intelligent species want to “colonize” the galaxy. Do they? Life does have a tendency to explore every available ecological niche, and humans sure do like to spread out. From our example of one Earth, it seems likely that this is a general tendency of life everywhere, but we are doing an awful lot of extrapolating here. Maybe other types of intelligence have other motivations that have nothing to do with expansion.
  • Intelligent species become technological species. Do they? It’s certainly true for humans, but dolphins have a high level of intelligence and they are not trying to build spaceships. Crows, chimps, and bonobos are also capable of simple tool use, but they don’t appear to have experienced any evolutionary pressure to become true technological species.
  • Technological species can travel at a significant fraction of the speed of light. (I assume you mean something like more than 1% of light speed.) Can they? Extrapolating from human technology, that seems extremely likely. Then again, the fastest spacecraft we have ever built would take about 300,000 years to reach the next star. Nobody is going to be colonizing the galaxy at that rate. You have to accept that speculative but unproven technologies are both feasible and practical for more advanced technological civilizations. Maybe intelligent life is out there, but in isolated pockets.
  • Intelligent, technological, space-faring species survive for a long time. Do they? Oh boy, we have no idea at all if this is true. Earth is 4.5 billion years old. Life has been around 4 billion years. Land species have been around 400 million years. Rat-level intelligence has maybe been around 200 million years. Our species has been around for about 100 thousand years. We have been capable of spaceflight for less than 100 years. It may seem inconceivable that humans could go extinct—but even if we last another 100,000 years, that may not be nearly enough time to spread across the galaxy, even if we develop the means to do it and maintain the will to do it. If intelligent species typically last less than 100,000 years, thousands of them could have come and gone in our galaxy without us ever knowing.

So there’s not one answer, but a whole set of overlapping possible answers why we don’t see evidence of any alien civilizations around us. And that doesn’t even consider more exotic possibilities, such as the idea that they might be here but just undetectable to us or deliberately hidden from our primitive eyes.

Success with digital measurement of parabolic telescope mirrors

02 Saturday Nov 2024

Posted by gfbrandenburg in astronomy, astrophysics, Math, Optics, science, Telescope Making

≈ Leave a comment

Tags

astronomy, ATM, data, FigureXP, foucault, measurement, millies-lacroix, paraboloid, Telescope

Alan Tarica, Pratik Tambe, Tom Crone and I have been pulling our hair out for a couple of years, trying to use cameras and software to measure the ‘figure’ of the telescope mirrors that we and others produce in our telescope-making class.

There has been progress, and there has been frustration.

I think we finally succeeded!

Some of the difficulties have been described in previous posts. In brief, we want our mirrors to be really, really close to a perfect paraboloid. There are many ways of doing those measurements and seeing whether one is close enough, but none of those methods are easy!

(By the way, one needs the entire mirror to be within one-tenth of a wave-length of green light of that ideal paraboloid! That’s extremely tiny, and equivalent to the thickness of a pencil over a ten-mile diameter!)

I think I can finally report a victory. My evidence is this graph, which I made just now using data that Alan and I gathered last night with our setup: a surveillance camera coupled to an old 35mm SLR film-camera lens, mounted on a linear-actuator screw that is driven by a stepper motor and controlled by an Arduino and a Python app developed by Pratik.

Something seemed to be always a bit — or a lot — ‘off’.

Until today, when I converted everything to millimeters and used the criterion set out by Adrien Millies-Lacroix in an article he wrote in Sky & Telescope back in 1976.

The blue dots just above the x-axis are the measurements for this one particular mirror with a diameter of 8″ and a radius of curvature of 77 inches.

The dotted blue curve in the middle of the image is the best-fit parabola for those dots. Notice that the R-squared value (the coefficient of determination) for that fit is not great: 0.3599.

But that R-squared value isn’t what matters. What matters is the green and orange blobs and curves above and below the blue ones.

The green and orange curves are the upper and lower allowable limits for the measurements of this particular mirror, using the Millies-Lacroix criterion mentioned above.

Clearly, the blue dots are all well within the green and orange curves.

Which means that this mirror is sufficiently parabolized.

The fact that the blue dots don’t fit the dotted curve perfectly, and that they behave rather oddly at plus or minus 80 millimeters, agrees with what the photos show: the surface of this mirror is rather rough, as you can see in the images below. Note also that the image labeled ‘Step 6’ shows not one but two null zones on the right, indicated by two vertical blue lines.

So, finally, we have an algorithm that gives good measurements! What I still want to do is to automate all the spreadsheet calculations that I just did today. Perhaps we can upload them to something like FigureXP by Dave Rowe and James Lerch.
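The spreadsheet work I want to automate has roughly this shape (a sketch with made-up zone readings; the actual Millies-Lacroix tolerance formula from the 1976 Sky & Telescope article is only stubbed out here, since I won’t reproduce it from memory):

```python
import numpy as np

# Zone positions (mm from the mirror center) and knife-edge readings (mm);
# these numbers are hypothetical stand-ins for the data we collected.
zone_mm = np.array([-80, -60, -40, -20, 20, 40, 60, 80], dtype=float)
reading_mm = np.array([0.55, 0.38, 0.21, 0.08, 0.07, 0.22, 0.40, 0.52])

# Least-squares best-fit parabola (the dotted blue curve in the graph).
coeffs = np.polyfit(zone_mm, reading_mm, deg=2)
fit = np.polyval(coeffs, zone_mm)

# R-squared (coefficient of determination) of that fit.
ss_res = np.sum((reading_mm - fit) ** 2)
ss_tot = np.sum((reading_mm - reading_mm.mean()) ** 2)
print(f"R^2 of the best-fit parabola: {1 - ss_res / ss_tot:.4f}")

def tolerance_band(fit_values, half_width_mm=0.10):
    """Placeholder for the Millies-Lacroix upper/lower limits (the green and
    orange curves); the real formula depends on the mirror's diameter and
    radius of curvature and should replace this fixed half-width."""
    return fit_values - half_width_mm, fit_values + half_width_mm

lower, upper = tolerance_band(fit)
in_tolerance = np.all((reading_mm >= lower) & (reading_mm <= upper))
print("all zone readings within tolerance:", bool(in_tolerance))
```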

Thanks very much to all those who have helped, whom I should look up and name here.

Caveat: This method can give really ridiculous measurements close to the center and close to the edge.

PS: if anybody wants the raw data, just email me at gfbrandenburg at gmail dot com.

An apparent paradox

15 Tuesday Aug 2023

Posted by gfbrandenburg in Math, teaching, Uncategorized

≈ 1 Comment

Tags

conundrum, daughters, paradox, probability

Some probability situations are quite confusing!

There is a pretty well-known paradox which goes something like this:

You hear that the Smith and Jones families each have two children.

You are told that the older Smith child is a girl, and that at least one of the Jones children is a girl.

Assuming that boys and girls are equally likely to be born (I know this is not quite true, but let’s pretend) in any given pregnancy, what are the chances that the Smiths have two girls? How about the Joneses?

Most people would say that those probabilities are equal: 50% in both cases.

But they are not. In fact, it is much less likely that the Joneses have two daughters!

Here is why:

In any family with two children, there is an older sibling and a younger one.

In the Smith family, you know that the older child is a girl, but you know nothing about their younger child, so the younger one is equally likely to be male or female. So the chances that the Smiths have two daughters is indeed 1/2, or 50%.

In the Jones family, all we know is that there is at least one girl. Let’s list all of the equally-likely possibilities in any family with two children, older child first:

1. older boy, younger boy
2. older boy, younger girl
3. older girl, younger boy
4. older girl, younger girl

With the Smith family, we can rule out cases 1 and 2, leaving us cases 3 and 4.

However, with the Jones family, we can only rule out case number 1. Cases 2, 3, and 4 — which are all equally likely — are all possible outcomes for the Joneses. Notice that only in case number 4 do the Joneses have two daughters. So with the Joneses, the chances that there are two daughters is only 1 in 3, or 33.3%.

And that result is a lot lower than 50%!

Weird, right?
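If the one-third answer still feels wrong, a quick Monte Carlo simulation (my addition, not part of the original post) is hard to argue with:

```python
import random

def simulate(trials: int = 1_000_000) -> None:
    smith_total = smith_two_girls = 0   # families whose OLDER child is a girl
    jones_total = jones_two_girls = 0   # families with AT LEAST ONE girl

    for _ in range(trials):
        older = random.choice("BG")
        younger = random.choice("BG")
        if older == "G":                        # what we know about the Smiths
            smith_total += 1
            smith_two_girls += (younger == "G")
        if older == "G" or younger == "G":      # what we know about the Joneses
            jones_total += 1
            jones_two_girls += (older == "G" and younger == "G")

    print("P(two girls | older child is a girl) ≈", round(smith_two_girls / smith_total, 3))
    print("P(two girls | at least one girl)     ≈", round(jones_two_girls / jones_total, 3))

simulate()   # prints roughly 0.5 and 0.333
```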

A Navigation / Geometry Problem

24 Sunday Oct 2021

Posted by gfbrandenburg in astronomy, History, Math, science, teaching

≈ Leave a comment

Tags

Benjamin Banneker, District of Columbia, Pierre L'Enfant, Washington

I had the pleasure of helping lead a field trip for 9th-grade Geometry students at School Without Walls SHS that we call ‘Math on the Mall’, along with two colleagues from the SWW math faculty.

One of our goals is for the students to see how beautifully and geometrically this city was laid out by Pierre l’Enfant, Andrew Ellicott, and Benjamin Banneker about 230 years ago.

While lots of myths have been written and repeated about Banneker’s actual contribution, the fact is that he was the astronomer responsible for determining due north exactly, and for finding the exact latitude and longitude of the southern tip of the original 10-mile-square piece of land. He was able to do so in 1790, with no Internet or SatNav, not even a telegraph or steam engine, and with no landmarks to measure from, using only a very nice refractor and a highly accurate clock that he was entrusted with.

I was sad to see that exactly none of the students knew which way was north, in a city where the numbered streets near the Mall and in the rest of DC’s historic downtown are almost all laid out perfectly north-south, and where the streets named with letters or with words like ‘Newark’, along with the streets along the Mall, are laid out perfectly east-west. Very few of them had ever seen the Milky Way, though most had heard of Polaris, the North Star.

Hopefully they will remember that in the future as they do more navigation on their own in this great city.

I challenged them to try to figure out why the angle of elevation of the North Star is the same as their latitude. Here is a diagram illustrating the problem:

The Earth, Polaris, and You.

This diagram is intended to help you understand why the North Star’s elevation above your horizon always gives you your latitude (if you live north of the Equator).

The big circle represents the Earth. The center of the earth is at E. The equator is AD.

YOU, the observer, are standing outside on a clear night. You see Polaris in the direction of ray BG. Line HCE is the Earth’s axis, and it also points at Polaris – which is so far away, and seems so tiny, but yet is also so large, that yes, parallel rays BG and CH do, for all practical purposes, point at the same point in the sky. Ray EB starts at the center of the Earth (E), passes through you at B, and continues on to the zenith (the part of the sky that is directly overhead). The horizon (BF) and the zenith ray (EB) are perpendicular. Also, line HCE (the Earth’s axis) is perpendicular to the equator (segment AED).

If you are out on the National Mall at night with some sort of angle-measuring device, you can very carefully measure the angle of elevation of the North Star above the local horizon; ideally you should find that this angle, FBG, is about 38.9 degrees, but we could also just call it X degrees.

Prove (i.e. explain) why your latitude (which is angle AEB) measures the same as angle FBG.

What are the givens?
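For readers who want to check their reasoning afterwards (the post deliberately leaves the proof to the students), here is one way the argument can go, written in the labels of the diagram above; the point Z, on ray EB beyond B, is my own addition:

```latex
% Givens: BG is parallel to EH (both point at Polaris);
%         BF is perpendicular to EB (the horizon is perpendicular to the zenith direction);
%         EH is perpendicular to AD (the axis is perpendicular to the equator).
% Let Z be a point on ray EB beyond B (the zenith direction),
% and let the latitude be \angle AEB = \varphi.
\begin{align*}
\angle BEH &= \angle AEH - \angle AEB = 90^\circ - \varphi,\\
\angle ZBG &= \angle BEH = 90^\circ - \varphi
  &&\text{(corresponding angles: } BG \parallel EH \text{ cut by line } EBZ\text{)},\\
\angle FBG &= \angle FBZ - \angle ZBG = 90^\circ - (90^\circ - \varphi) = \varphi .
\end{align*}
```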

=========================================================

Full disclosure: My daughter graduated from SWW two decades ago. I taught there for a year, and for ten years at a school that is now associated with it: Francis (then a junior high school, now a middle school).

The kids were nice back then, and they still are. I thought the teachers did a great job.

This is a DC public high school that you have to apply to.

Benjamin Banneker was an amazing person. There are a lot of myths that have been attached to his work and accomplishments, which I am guessing might be because the people repeating them didn’t actually understand the math and astronomy that he really did accomplish. The best book on him is by Silvio Bedini.

‘Math on the Mall’ was originated by Florence Fasanelli, Richard Thorington, and V. Frederick Rickey around 1990. I participated as a math teacher in a couple of those tours led by FF. Later, I wanted to take my students on a similar tour that would include a trip to see a number of the works of the geometer and artist M. C. Escher; since I couldn’t find my copy of their materials, I made up my own version, adding to it the work of FF, RT, and VFR and suggestions from teachers and students. Later on, the Mathematical Association of America made something similar, which you can find here.

My version was on the website of the Carnegie Institution for Science for a number of years. See page 56 on this link. I need to find someone to cut out some of my excess verbiage and then trot it out to a publisher.

A piece of mystery glass

29 Sunday Aug 2021

Posted by gfbrandenburg in astronomy, Hopewell Observatorry, Math, Optics, Telescope Making

≈ 3 Comments

Tags

ATM, barium, Bausch & Lomb, Bureau of Standards, flint, glass, Hopewell Observatory, Math, mystery glass, Optics, Schott, Snell's Law, Telescope

Many years ago, the late Bob Bolster, a founding member of Hopewell Observatory and an amazing amateur telescope maker, got hold of a large piece of glass, perhaps World War Two military surplus left over from the old Bureau of Standards.

I have no idea what it is made out of. If Bob had any clue about its composition, he didn’t tell anyone.

Its diameter is 22 inches, and its thickness is about 3.25″. It has a yellowish tint, and it is very, very heavy.

If you didn’t know, telescope lenses (just like binocular or camera lenses) are made from a wide variety of ingredients, carefully selected to refract the various colors of light just so. Almost all glass contains quartz (SiO2), but it can also contain limestone (CaCO3), boric oxide (B2O3), phosphates, fluorides, lead oxide, and even rare-earth elements like lanthanum or thorium. This link will tell you more than you need to know.

If you are making lenses for a large refracting telescope, you need two very different types of glass, and you need to know their indices of refraction very precisely, so that you can calculate the exact curvatures needed for the color distortions produced by one lens to be mostly canceled out by the other piece(s) of glass. This is not simple! The largest working refractor today is the Yerkes, with a diameter of 40 inches (~1 meter). By comparison, the largest reflecting telescope made with a single piece of glass today is the Subaru on Mauna Kea, with a diameter of 8.2 meters (323 inches).

For a reflecting telescope, one generally doesn’t care very much what the exact composition of the glass might be, as long as it doesn’t expand and contract too much when the temperature rises or falls.

We weren’t quite sure what to do with this heavy disk, but we figured that before either grinding it into a mirror or selling it, we should try to figure out what type of glass it might be.

Several companies that produce optical glass publish catalogs that list all sorts of data, including density and indices of refraction and dispersion.

Some of us Hopewell members used a bathroom scale and tape measures to estimate its density. The diameter is 22 inches (55.9 cm) and the thickness is 3.25 inches (8.26 cm). Using the formula for the volume of a cylinder, V = π·r²·h, that works out to about 1,235 cubic inches, or roughly 20,270 cubic centimeters. The bathroom scale put its weight at about 130 pounds, or 59 kg (both plus or minus 1 or 2). It is possible that the scale got confused, since it expects two feet to be placed on it, rather than one large disk of glass.

However, if our measurements are correct, its density is about 2.91 grams per cc, or 1.68 ounces per cubic inch. (We figured that the density might be as low as 2.80 or as high as 3.00 if the scale was a bit off.)
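Just to make that arithmetic explicit (a quick sketch of the calculation above, using our measured numbers):

```python
import math

# Measurements of the glass disk, from the post.
diameter_cm = 55.9    # 22 inches
thickness_cm = 8.26   # 3.25 inches
mass_kg = 59.0        # about 130 lbs on the bathroom scale, give or take

volume_cc = math.pi * (diameter_cm / 2) ** 2 * thickness_cm   # cylinder: V = pi * r^2 * h
density = mass_kg * 1000 / volume_cc                          # grams per cubic centimeter

print(f"volume  ≈ {volume_cc:,.0f} cc")    # about 20,270 cc
print(f"density ≈ {density:.2f} g/cc")     # about 2.91 g/cc

# If the scale read a couple of kilograms high or low:
for m in (57.0, 61.0):
    print(f"mass {m:.0f} kg  ->  density {m * 1000 / volume_cc:.2f} g/cc")
```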

It turns out that there are lots of different types of glass in that range.

Looking through the Schott catalog I saw the following types of glass with densities in that range, but I may have missed a few.

2.86  N-SF5

2.86 M-BAK2

2.89 N-BAF4

2.90 N-SF8

2.90 P-SF8

2.91 N-PSK3

2.92 N-SF15

2.93 P-SF69

2.94 LLF1

2.97 P-SK58A

3.00 N-KZFS5

3.01 P-SK57Q1

By comparison, some of the commonest and cheapest optical glasses are BAK-4 with density 3.05 and BK-7 with density 2.5.

Someone suggested that the glass might contain radioactive thorium. I don’t have a working Geiger counter, but I used an iPhone app called GammaPix, which reported no gamma-ray radioactivity at all; I also found that none of the glasses listed above (as manufactured today by Schott) contain any uranium, thorium, or lanthanum (which is used to replace thorium).

So I then rigged up a fixed laser pointer to measure its index of refraction using Snell’s Law, which says that n1·sin(a) = n2·sin(b), where a is the angle of incidence, b is the angle of refraction, and n1 and n2 are the indices of refraction of the two media (here, air and the glass).

Here is a schematic of my setup:


I found the fixed angle a to be between 50 and 51 degrees, by putting my rig on a large mirror and measuring the angle of reflection with a carpentry tool.

And here is what it looked like in practice:

I slid the jig back and forth until I could make it so that the refracted laser beam just barely hit the bottom edge of the glass blank.

I marked where the laser is impinging upon the glass, and I measured the distance d from that spot to the top edge of the glass.

I divided d by the thickness of the glass, in the same units, and found the arc-tangent of that ratio; that is the measure, b, of the angle of refraction.

One generally uses 1.00 for the index of refraction of air (n1). I am calling n2 the index of refraction of the glass. I had never actually done this experiment before; I had only read about doing it.

As you might expect, with such a crude setup, I got a range of answers for the thickness of the glass, and for the distance d. Even angle a was uncertain: somewhere around 49 or 50 degrees. For the angle of refraction, I got answers somewhere between 25.7 and 26.5 degrees.

All of this gave me an index of refraction for this glass of somewhere between 1.723 and 1.760.
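Here is the same arithmetic in code form (my own sketch; the distance d below is an illustrative value chosen to land in the reported range, not an actual reading):

```python
import math

n_air = 1.00           # index of refraction of air
angle_a_deg = 50.0     # fixed angle of incidence of the laser (measured as roughly 49-51 degrees)
thickness_in = 3.25    # thickness of the glass blank, inches
d_in = 1.59            # illustrative value for the measured distance d, inches

# Angle of refraction from tan(b) = d / thickness, as described above.
angle_b_deg = math.degrees(math.atan(d_in / thickness_in))

# Snell's law: n_air * sin(a) = n_glass * sin(b)
n_glass = n_air * math.sin(math.radians(angle_a_deg)) / math.sin(math.radians(angle_b_deg))

print(f"angle of refraction b ≈ {angle_b_deg:.1f} degrees")
print(f"index of refraction  ≈ {n_glass:.3f}")   # lands in the 1.72-1.76 range
```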

This gave me a list of quite a few different glasses in several catalogs (two from Schott and one from Bausch & Lomb).

Unfortunately, there is no glass with a density between 2.80 and 3.00 g/cc that has an index of refraction in that range.

None.

So, either we have a disk of unobtanium, or else we did some measurements incorrectly.

I’m guessing it’s not unobtanium.

I’m also guessing the error is probably in our weighing procedure. The bathroom scale we used is not very accurate and probably got confused because the glass doesn’t have two feet.

A suggestion was made that this might be what Bausch and Lomb called Barium Flint, but that has an index of refraction that’s too low, only 1.605.

Mystery is still unsolved.

Can Mathematicians be Replaced by Computers?

30 Sunday Aug 2020

Posted by gfbrandenburg in education, Math, teaching

≈ Leave a comment

Tags

artificial intelligence, computers, conjecture, induction, logic, mathematicians, mathematics, proof

The short answer is, certainly not yet.

Can they ever be? From reading this article and my own experience with the geometry-proving-and-construction software called Geometrix, written by my friend and colleague Jacques Gressier, I am not sure it’s possible at all.

Here is an interesting article that I’m copying and pasting from Jerry Becker at SIU, who got it from Quanta:

From Quanta, Thursday, August 27, 2020. SEE
https://www.quantamagazine.org/how-close-are-computers-to-automating-mathematical-reasoning-20200827/
**************************
How Close Are Computers to Automating Mathematical Reasoning?
AI tools are shaping next-generation theorem provers, and with them the relationship between math and machine.
By Stephen Ornes
In the 1970s, the late mathematician Paul Cohen, the only person to ever win a Fields Medal for work in mathematical logic, reportedly made a sweeping prediction that continues to excite and irritate mathematicians – that “at some unspecified future time, mathematicians would be replaced by computers.” Cohen, legendary for his daring methods in set theory, predicted that all of mathematics could be automated, including the writing of proofs.

A proof is a step-by-step logical argument that verifies the truth of a conjecture, or a mathematical proposition. (Once it’s proved, a conjecture becomes a theorem.) It both establishes the validity of a statement and explains why it’s true. A proof is strange, though. It’s abstract and untethered to material experience. “They’re this crazy contact between an imaginary, nonphysical world and biologically evolved creatures,” said the cognitive scientist Simon DeDeo of Carnegie Mellon University, who studies mathematical certainty by analyzing the structure of proofs. “We did not evolve to do this.”

Computers are useful for big calculations, but proofs require something different. Conjectures arise from inductive reasoning – a kind of intuition about an interesting problem – and proofs generally follow deductive, step-by-step logic. They often require complicated creative thinking as well as the more laborious work of filling in the gaps, and machines can’t achieve this combination.

Computerized theorem provers can be broken down into two categories. Automated theorem provers, or ATPs, typically use brute-force methods to crunch through big calculations. Interactive theorem provers, or ITPs, act as proof assistants that can verify the accuracy of an argument and check existing proofs for errors. But these two strategies, even when combined (as is the case with newer theorem provers), don’t add up to automated reasoning.
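To make the ITP idea concrete, here is a toy machine-checked proof in Lean (an illustration added to this repost, not part of the Quanta article): the human states the theorem and chooses the proof steps, and the software verifies each one.

```lean
-- Illustration only (not from the Quanta article): an interactive theorem
-- prover such as Lean checks every step the human supplies.
theorem zero_add_eq (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl                         -- base case: 0 + 0 = 0 by definition
  | succ k ih => rw [Nat.add_succ, ih]  -- inductive step uses the hypothesis 0 + k = k
```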

——————-
SIDEBAR PHOTO:  Simon DeDeo of Carnegie Mellon helped show that people and machines seem to construct mathematical proofs in similar ways. Courtesy of Simon DeDeo
——————–
Plus, the tools haven’t been met with open arms, and the majority of mathematicians don’t use or welcome them. “They’re very controversial for mathematicians,” DeDeo said. “Most of them don’t like the idea.”

A formidable open challenge in the field asks how much proof-making can actually be automated: Can a system generate an interesting conjecture and prove it in a way that people understand? A slew of recent advances from labs around the world suggests ways that artificial intelligence tools may answer that question. Josef Urban at the Czech Institute of Informatics, Robotics and Cybernetics in Prague is exploring a variety of approaches that use machine learning to boost the efficiency and performance of existing provers. In July, his group reported a set of original conjectures and proofs generated and verified by machines. And in June, a group at Google Research led by Christian Szegedy posted recent results from efforts to harness the strengths of natural language processing to make computer proofs more human-seeming in structure and explanation.

Some mathematicians see theorem provers as a potentially game-changing tool for training undergraduates in proof writing. Others say that getting computers to write proofs is unnecessary for advancing mathematics and probably impossible. But a system that can predict a useful conjecture and prove a new theorem will achieve something new –  some machine version of understanding, Szegedy said. And that suggests the possibility of automating reason itself.
Useful Machines

Mathematicians, logicians and philosophers have long argued over what part of creating proofs is fundamentally human, and debates about mechanized mathematics continue today, especially in the deep valleys connecting computer science and pure mathematics.

For computer scientists, theorem provers are not controversial. They offer a rigorous way to verify that a program works, and arguments about intuition and creativity are less important than finding an efficient way to solve a problem. At the Massachusetts Institute of Technology, for example, the computer scientist Adam Chlipala has designed theorem-proving tools that generate cryptographic algorithms – traditionally written by humans – to safeguard internet transactions. Already, his group’s code is used for the majority of the communication on Google’s Chrome browser.

——————-
SIDEBAR PHOTO: Emily Riehl of Johns Hopkins University uses theorem provers in teaching students and proof assistants in her own work. “Using a proof assistant has changed the way I think about writing proofs,” she said. Will Kirk/Johns Hopkins University
———————-
“You can take any kind of mathematical argument and code it with one tool, and connect your arguments together to create proofs of security,” Chlipala said.

In math, theorem provers have helped produce complicated, calculation-heavy proofs that otherwise would have occupied hundreds of years of mathematicians’ lives. The Kepler conjecture, which describes the best way to stack spheres (or, historically, oranges or cannonballs), offers a telling example. In 1998, Thomas Hales, together with his student Sam Ferguson, completed a proof using a variety of computerized math techniques. The result was so cumbersome – the results took up 3 gigabytes – that 12 mathematicians analyzed it for years before announcing they were 99% certain it was correct.

The Kepler conjecture isn’t the only famous question to be solved by machines. The four-color theorem, which says you only need four hues to color any two-dimensional map so that no two adjoining regions share a color, was settled in 1977 by mathematicians using a computer program that churned through five-colored maps to show they could all be reduced to four. And in 2016, a trio of mathematicians used a computer program to prove a longstanding open challenge called the Boolean Pythagorean triples problem, but the initial version of the proof was 200 terabytes in size. With a high-speed internet connection, a person could download it in a little over three weeks.

Complicated Feelings
These examples are often trumpeted as successes, but they’ve also added to the debate. The computer code proving the four-color theorem, which was settled more than 40 years ago, was impossible for humans to check on their own. “Mathematicians have been arguing ever since whether or not it’s a proof,” said the mathematician Michael Harris of Columbia University.
———————-
SIDEBAR PHOTO:  Many mathematicians, like Columbia University’s Michael Harris, disagree with the idea that computerized theorem provers are necessary – or that they’ll make human mathematicians obsolete. Béatrice Antolin
———————–
Another gripe is that if they want to use theorem provers, mathematicians must first learn to code and then figure out how to express their problem in computer-friendly language – activities that detract from the act of doing math. “By the time I’ve reframed my question into a form that could fit into this technology, I would have solved the problem myself,” Harris said.

Many just don’t see a need for theorem solvers in their work. “They have a system, and it’s pencil and paper, and it works,” said Kevin Buzzard, a mathematician at Imperial College London who three years ago pivoted his work from pure math to focus on theorem provers and formal proofs. “Computers have done amazing calculations for us, but they have never solved a hard problem on their own,” he said. “Until they do, mathematicians aren’t going to be buying into this stuff.”

But Buzzard and others think maybe they should. For one thing, “computer proofs may not be as alien as we think,” DeDeo said. Recently, together with Scott Viteri, a computer scientist now at Stanford University, he reverse-engineered a handful of famous canonical proofs (including one from Euclid’s Elements) and dozens of machine-generated proofs, written using a theorem prover called Coq, to look for commonalities. They found that the networked structure of machine proofs was remarkably similar to the structure of proofs made by people. That shared trait, he said, may help researchers find a way to get proof assistants to, in some sense, explain themselves.
“Machine proofs may not be as mysterious as they appear,” DeDeo said.

Others say theorem provers can be useful teaching tools, in both computer science and mathematics. At Johns Hopkins University, the mathematician Emily Riehl has developed courses in which students write proofs using a theorem prover. “It forces you to be very organized and think clearly,” she said. “Students who write proofs for the first time can have trouble knowing what they need and understanding the logical structure.”

————————–
SIDEBAR:  By the time I’ve reframed my question into a form that could fit into this technology, I would have solved the problem myself.  Michael Harris, Columbia University
————————-
Riehl also says that she’s been increasingly using theorem provers in her own work. “It’s not necessarily something you have to use all the time, and will never substitute for scribbling on a piece of paper,” she said, “but using a proof assistant has changed the way I think about writing proofs.”

Theorem provers also offer a way to keep the field honest. In 1999, the Russian American mathematician Vladimir Voevodsky discovered an error in one of his proofs. From then until his death in 2017, he was a vocal proponent of using computers to check proofs. Hales said that he and Ferguson found hundreds of errors in their original proof when they checked it with computers. Even the very first proposition in Euclid’s Elements isn’t perfect. If a machine can help mathematicians avoid such mistakes, why not take advantage of it? (The practical objection, justified or not, is the one suggested by Harris: If mathematicians have to spend their time formalizing math to be understood by a computer, that’s time they’re not spending doing new math.)

But Timothy Gowers, a mathematician and Fields medalist at the University of Cambridge, wants to go even further: He envisions a future in which theorem provers replace human referees at major journals. “I can see it becoming standard practice that if you want your paper to be accepted, you have to get it past an automatic checker,” he said.

————-
But before computers can universally check or even devise proofs, researchers first have to clear a significant hurdle: the communication barrier between the language of humans and the language of computers.

Today’s theorem provers weren’t designed to be mathematician-friendly. ATPs, the first type, are generally used to check if a statement is correct, often by testing possible cases. Ask an ATP to verify that a person can drive from Miami to Seattle, for example, and it might search all cities connected by roads leading away from Miami, eventually finding a city with a road leading into Seattle.

———————–
SIDEBAR PHOTO: Not every mathematician hates theorem provers. Timothy Gowers, of the University of Cambridge, thinks they may one day replace human reviewers at mathematical journals. The Abel Prize
———————-
With an ATP, a programmer can code in all the rules, or axioms, and then ask if a particular conjecture follows those rules. The computer then does all the work. “You just type in the conjecture you want to prove, and you hope you get an answer,” said Daniel Huang, a computer scientist who recently left the University of California, Berkeley, to work at a startup.

But here’s the rub: What an ATP doesn’t do is explain its work. All that calculating happens within the machine, and to human eyes it would look like a long string of 0s and 1s. Huang said it’s impossible to scan the proof and follow the reasoning, because it looks like a pile of random data. “No human will ever look at that proof and be able to say, ‘I get it,'” he said.

ITPs, the second category, have vast data sets containing up to tens of thousands of theorems and proofs, which they can scan to verify that a proof is accurate. Unlike ATPs, which operate in a kind of black box and just spit out an answer, ITPs require human interaction and even guidance along the way, so they’re not as inaccessible. “A human could sit down and understand what the proof-level techniques are,” said Huang. (These are the kinds of machine proofs DeDeo and Viteri studied.)
ITPs have become increasingly popular in recent years. In 2017, the trio behind the Boolean Pythagorean triples problem used Coq, an ITP, to create and verify a formal version of their proof; in 2005 Georges Gonthier at Microsoft Research Cambridge used Coq to formalize the four-color theorem. Hales also used ITPs called HOL Light and Isabelle on the formal proof of the Kepler conjecture. (“HOL” stands for “higher-order logic.”)

Efforts at the forefront of the field today aim to blend learning with reasoning. They often combine ATPs with ITPs and also integrate machine learning tools to improve the efficiency of both. They envision ATP/ITP programs that can use deductive reasoning – and even communicate mathematical ideas – the same way people do, or at least in similar ways.

The Limits of Reason

Josef Urban thinks that the marriage of deductive and inductive reasoning required for proofs can be achieved through this kind of combined approach. His group has built theorem provers guided by machine learning tools, which allow computers to learn on their own through experience. Over the last few years, they’ve explored the use of neural networks – layers of computations that help machines process information through a rough approximation of our brain’s neuronal activity. In July, his group reported on new conjectures generated by a neural network trained on theorem-proving data.

Urban was partially inspired by Andrej Karpathy, who a few years ago trained a neural network to generate mathematical-looking nonsense that looked legitimate to nonexperts. Urban didn’t want nonsense, though – he and his group instead designed their own tool to find new proofs after training on millions of theorems. Then they used the network to generate new conjectures and checked the validity of those conjectures using an ATP called E.

The network proposed more than 50,000 new formulas, though tens of thousands were duplicates. “It seems that we are not yet capable of proving the more interesting conjectures,” Urban said.
————————–

SIDEBAR: [Machines] will learn how to do their own prompts.  Timothy Gowers, University of Cambridge
—————————-
Szegedy at Google Research sees the challenge of automating reasoning in computer proofs as a subset of a much bigger field: natural language processing, which involves pattern recognition in the usage of words and sentences. (Pattern recognition is also the driving idea behind computer vision, the object of Szegedy’s previous project at Google.) Like other groups, his team wants theorem provers that can find and explain useful proofs.

Inspired by the rapid development of AI tools like AlphaZero – the DeepMind program that can defeat humans at chess, Go and shogi – Szegedy’s group wants to capitalize on recent advances in language recognition to write proofs. Language models, he said, can demonstrate surprisingly solid mathematical reasoning.

His group at Google Research recently described a way to use language models – which often use neural networks – to generate new proofs. After training the model to recognize a kind of treelike structure in theorems that are known to be true, they ran a kind of free-form experiment, simply asking the network to generate and prove a theorem without any further guidance. Of the thousands of generated conjectures, about 13% were both provable and new (meaning they didn’t duplicate other theorems in the database). The experiment, he said, suggests that the neural net could teach itself a kind of understanding of what a proof looks like.

“Neural networks are able to develop an artificial style of intuition,” Szegedy said.

Of course, it’s still unclear whether these efforts will fulfill Cohen’s prophecy from over 40 years ago. Gowers has said that he thinks computers will be able to out-reason mathematicians by 2099. At first, he predicts, mathematicians will enjoy a kind of golden age, “when mathematicians do all the fun parts and computers do all the boring parts. But I think it will last a very short time.”
———————–
Related:
.  Machines Beat Humans on a Reading Test. But Do They Understand?  —  https://www.quantamagazine.org/machines-beat-humans-on-a-reading-test-but-do-they-understand-20191017/
.  Will Computers Redefine the Roots of Math?  —  https://www.quantamagazine.org/univalent-foundations-redefines-mathematics-20150519/
.  Symbolic Mathematics Finally Yields to Neural Networks  —  https://www.quantamagazine.org/symbolic-mathematics-finally-yields-to-neural-networks-20200520/
———————–
After all, if the machines continue to improve, and they have access to vast amounts of data, they should become very good at doing the fun parts, too. “They will learn how to do their own prompts,” Gowers said.

Harris disagrees. He doesn’t think computer provers are necessary, or that they will inevitably “make human mathematicians obsolete.” If computer scientists are ever able to program a kind of synthetic intuition, he says, it still won’t rival that of humans. “Even if computers understand, they don’t understand in a human way.”

*********************************************

A neat geometry lesson! And a rant…

13 Thursday Feb 2020

Posted by gfbrandenburg in education, flat, History, Math, Optics, teaching, Telescope Making, Uncategorized

≈ Leave a comment

Tags

apps, computer, computer-managed instruction, geometry, kaleidoscope, Math, Mirror, programs, reflection, school

Here is some information that teachers at quite a few different levels could use* for a really interesting geometry lesson on reflections in two or more mirrors placed at various angles!

Certain specific angles have very special effects, including 90, 72, 60, 45 degrees … But WHY?

This could be done with actual mirrors and a protractor, or with geometry software like Geometer’s Sketchpad or Desmos. Students could also end up making their own kaleidoscopes – either with little bits of colored plastic at the end or else with some sort of a wide-angle lens. (You can find many easy directions online for doing just that; some kits are a lot more optically perfect than others, but I don’t think I’ve ever seen a kaleidoscope that had its mirrors set at any angle other than 60 degrees!)

I am reproducing a couple of the images and some of the text that Angel Gilding provides on their website, which they set up to sell silvering kits (about which I’ve posted before, and which I am going to attempt using pretty soon).

At 72º you see 4 complete reflections.

When two mirrors are parallel to each other, the number of reflections is infinite. Placing one mirror at a slight angle causes the reflections to curve.

 

https://angelgilding.com/multiple-reflections/
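Here is the pattern behind those special angles (my note, not from the Angel Gilding page): two mirrors meeting at an angle θ that divides evenly into 360° produce 360/θ − 1 images of an object placed between them, which is why 72° gives 4 complete reflections and the usual 60° kaleidoscope angle gives 5.

```python
# Number of images formed by two plane mirrors meeting at an angle theta
# (degrees), for angles that divide evenly into 360. A sketch of the standard
# rule of thumb, not code from the post.
def image_count(theta_degrees: int) -> int:
    if 360 % theta_degrees != 0:
        raise ValueError("this simple rule is stated only for angles that divide 360 evenly")
    return 360 // theta_degrees - 1

for theta in (90, 72, 60, 45, 36, 30):
    print(f"{theta:3d} degrees -> {image_count(theta)} images")
```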

===========

Rant, in the form of a long footnote:

* assuming that teachers are still allowed to initiate and carry out interesting projects for their students, and aren’t forced to follow a scripted curriculum. That would be a far better use of computers than forcing an entire class to painfully walk through (and cheat, and goof off a lot in) one of those very expensive but basically worthless, highly centralized, district-purchased computer-managed-instruction apps. God, what a waste of time: I say this from personal experience attempting to be a volunteer community math tutor at such a school, and from my experience as a paid or volunteer tutor helping many, many students who have had to use such programs as homework. When I was required to use them in my own classes, over a decade ago, I and most of my colleagues found them a waste of time. (Not all of my colleagues agreed: I got officially reprimanded for telling my then-department head, in the hearing of one of the APs on a teacher-only day, that ‘Renaissance Math’ was either a ‘pile of crap’ or a ‘pile of shit’.)

Keep in mind: I’m no Luddite! I realized early on that in math, science, and art, computers would be very, very useful. I learned how to write programs in BASIC on one of the very first time-share networks, 45 years ago. For the first ten years that my school system had computers, there was almost no decent, useful software for math teachers to use with their classes unless you had Apple II computers. We had Commodore 64s, which were totally incompatible, and very few companies (Sunburst was one) were putting out any decent software for them. So when I saw some great ideas that would be ideal for kids to use on computers, to make thinking about numbers, graphs, and equations actually fun and mentally engaging, I often had to write the programs myself during whatever free time I could catch, at nights and on weekends. Of course, I was doing this while being a daddy to two kids, teaching JHS math to a full load of students (100 to 150 different kids a day at Francis Junior High School), running a school math club, and later coaching soccer. (I won’t say I was a perfect person or a perfect teacher. I believe I learned to give better math explanations than most, didn’t believe that you either have a ‘math gene’ or you don’t, at times had some interesting projects, and at times was very patient and clear, but I had a terrible temper and was often not good at defusing things. Ask my kids or my former students!) Later on, I collaborated with some French math teachers and a computer programmer to try to make an app called Geometrix for American geometry classes, which was supposed to help kids figure out how to make all sorts of geometric constructions and then develop a proof of some property of that situation. It was a failure. I was the one writing the American version, including constructions and tasks from the text I was then using. There was no way I could anticipate what sorts of obstacles students would hit when using the program until I had actual guinea-pig students to try it with. As it turned out, the final crunch of writing however many hundreds of exercises took place over the summer, with no students to try them on. Figuring out hints and clues would have required watching a whole bunch of kids and seeing what they were getting right or wrong. In other words, it would have been a lot of people’s full-time job for a long time, maybe paying the kids as well to try it out so as to get good feedback, and so on. Maybe it could have worked, but it would have required a lot more investment of resources than the tiny French and American companies involved could afford. We would have needed a team of people, not just me and a single checker.

I find that none of these computer-dominated online learning programs (much less the one I worked on) can take the place of a good teacher. Being in class, listening to and communicating logically or emotionally with a number of other students and a knowledgeable adult or two, is in itself an extremely important skill  to learn. It’s also the best way to absorb new material in a way that will make sense and be added to one’s store of knowledge. That sort of group interaction is simply IMPOSSIBLE in a class where everybody is completely atomized and is on their own electronic device, engaged or not.

Without a human being helping them make sense of the material, what I found quite consistently, in all the computerized settings, was that most students absorbed nothing at all, or else absorbed the wrong lessons altogether (such as: ‘if you randomly try all the multiple-choice answers, you’ll eventually pick the right one and you can move on to some other stupid screen’; it doesn’t matter that all your prior choices were wrong; sometimes you get lucky and pick the right one first or second! Whee! It’s like a slot machine at a casino!).

By contrast, I found that with programs/apps/languages like Logo, Darts, Green Globs, or Geometer’s Sketchpad, with teacher guidance, students actually got engaged in the process, had fun, and learned something.

I find the canned computer “explanations” are almost always ignored by the students, and are sometimes flat-out wrong. Other times, although they may be mathematically correct, they assume either way too much or way too little, or else are just plain confusing. I have yet to detect much of any learning going on because of those programs.

Silvering Mirrors, and More, at Stellafane

05 Monday Aug 2019

Posted by gfbrandenburg in astronomy, flat, History, Math, monochromatic, optical flat, Optics, science, teaching, Telescope Making, Uncategorized

≈ 3 Comments

For me, these were the two most significant demos at the 2019 Stellafane Convention in Springfield, Vermont:

(1) Silvering large mirrors, no vacuum needed

We had a demonstration by Peter Pekurar of how to apply a layer of silver (metallic Ag, not aluminum) onto a telescope mirror, accurately and with a protective, non-tarnishing overcoat, and it works well. I looked through such a scope; the view was quite good, and I was told that the interferograms are great also.

What’s more, the process involves overcoating a mirror with spray bottles of the reagents, without any vacuum apparatus needed at all. Note: Silver coated, not aluminum coated. This is big for me because the upper limit at our club’s aluminizer is 12.5″, but some of us are working on larger mirrors than that; commercial coaters currently charge many hundreds of dollars to coat them.

You can find information on some of these materials at Angel Gilding. Peter P said he will have an article out before too long. Here are a few photos and videos of the process:

(Photos and videos of the process. Caption: finished mirror; notice it’s a little blotchy.)

(2) Demo and links for Bath Interferometer (see http://gr5.org/bath )

How to set up and use a Bath interferometer to produce highly accurate interferograms of any mirror for many orders of magnitude less cash than a Zygo interferometer. As I wrote earlier, Alan Tarica had taken the lead on fabricating one at the CCCC – NCA ATM workshop, and we eventually got it to work, but found it rather frustrating and fiddly to use.

The presenter is an HS teacher, and it shows: he explains things very clearly! On his website ( http://gr5.org/bath ) you can get plans for 3-D printing the parts for the Bath device yourself, if you have access to a 3-D printer. He also has links to vendors that sell parts for it, such as certain small lenses, mirrors, and beam splitters, and he shows where you can get them for very little money from Surplus Shed and similar places. Or you can purchase his really inexpensive kits of parts that he has already 3-D printed for you, plus parts for an XYZ stage, which you will need for fine focus. The whole setup (not counting the mirror stand and two tripods, which he assumes you already have) is under $130.

I will need to look carefully at our setup as built almost completely by Alan, and see how it differs and what we would need to do to make it better. The problem is that there are lots of little, tiny parts, and many of them need to be adjustable. We saw him doing LOTS of little adjustments!

Before his talk, I had absolutely no idea how this (or any similar interferometer) really worked. Now I understand: the interference fringes that we see are really contour lines, like the ones on a USGS topo map, only with the mirror tilted in one direction or the other. A big difference is that on a USGS topo map the contour lines (isohypses – a new word for me today) are often 10 feet to 100 meters apart, while in interferometry the contour intervals are either one or one-half lambda (a wavelength of light) apart – a really tiny amount! We need that level of accuracy because the surface we are studying is sooooooo close to the ideal shape that no other measuring system can do the job. His explanation of this whole thing now makes perfect sense to me. And the purpose of the (free!) software is to un-tilt the mirror and re-draw it using the contour-line information.
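To put a number on that (my own back-of-the-envelope note, not from the talk, and assuming a green laser at roughly 532 nm): one full fringe of tilt corresponds to half a wavelength of surface height, so a fringe that bows sideways by some fraction of the fringe spacing reveals a bump or hollow of that same fraction of λ/2.

```python
# Rough conversion from fringe deviation to surface error on an interferogram.
# A sketch of the usual rule of thumb, not the gr5.org analysis software.
WAVELENGTH_NM = 532.0   # assumed green laser wavelength

def surface_error_nm(fringe_deviation: float, fringe_spacing: float) -> float:
    """Surface height error implied by a fringe bowing sideways.

    Both arguments are in the same units (pixels, mm, ...); one full fringe of
    deviation corresponds to half a wavelength of surface height.
    """
    return (fringe_deviation / fringe_spacing) * (WAVELENGTH_NM / 2)

# Example: a fringe that bows sideways by a quarter of the spacing between fringes.
error = surface_error_nm(0.25, 1.0)
print(f"{error:.0f} nm of surface error, about 1/{WAVELENGTH_NM / error:.0f} wave")
```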

Beautifully clear explanation!

Caution: a friend who works professionally in optics told me his team had made three Bath interferometers, using cheap but good quality ebay xyz stages, and found that they were just too much trouble; so they borrowed a very expensive commercial interferometer (costing many tens of kilobucks) from another department and are using that instead. I’m not selling my house to get a Zygo interferometer!!! But I will try the Bath interferometer instead.

 

 
