The Unabashed Academic

19 May 2016

Still a physicist! Thanks, Emmy Noether

Recently, while browsing my Facebook feed, I was tempted to take one of the BuzzFeed quizzes that regularly pop up. Usually I'm immune to this kind of clickbait, not really being interested in "Which American Idol judge are you?" or "Which Game of Thrones character are you like?" (Though as a frequent traveler, I do often do the ones that ask, "How many states have you visited?" or "How many of the top 150 world travel sites have you seen?") This one asked, "Are you more of a physicist, biologist, or chemist?" This was clearly a quiz for scientists and, though I'm a lifelong physicist (practicing for 50 years), I've always been a "biology appreciator", collecting Wildlife Stamps as a boy and reading Stephen Jay Gould, E. O. Wilson, Konrad Lorenz, and lots of others as an adult. And for the past half dozen years or so, I've been holding many conversations with multiple biologists and learning some serious bio in the service of carrying out a deep reform of algebra-based physics to create an IPLS (Introductory Physics for Life Scientists) class – NEXUS/Physics. I wondered whether I had been sufficiently infected with biology memes to have gone over to the dark side.

I needn't have worried. As expected, I came out "Physicist". Their description of a physicist was one I liked and that describes my favorite physicists (and I hope me too): "You’re a thinker who loves nothing more than getting stuck into a good intellectual challenge. You love to read, and you’ve got so much information (useless and otherwise) stored in your brain that everyone wants to have you on their pub quiz team. Physics suits you because it lets you spend your time contemplating some of the smallest and biggest things in the universe, and tackle some really huge questions while you’re at it."

But I found one item in the quiz particularly interesting: "Select a real scientist." They offered three female scientists: Emmy Noether, Jane Goodall, and Rosalind Franklin. Although I assume that they matched Emmy to physics, Jane to biology, and Rosalind to chemistry, I think of both Goodall and Franklin as biologists. I have read some of both of their work – one of Jane Goodall's books on chimpanzees (and I regularly contribute to her save-the-chimps foundation), and Rosalind Franklin's paper on X-ray diffraction from DNA crystals. I've never read any of Emmy Noether's original writings, but her work was introduced into my physics classes in junior year and had a powerful impact on my thinking about the world and about physics. That's what I want to talk about here.

[But first, I'm inspired to make one of my typical academic digressions about a topic I've been thinking about: the structure of biological research. Reading E. O. Wilson's memoir, Naturalist, clarified for me a lot of what I have been seeing in my recent conversations with multiple biologists. I refer to this as "the Wilson/Watson abyss". Around 1960, E. O. Wilson and J. D. Watson were both new Assistant Professors in the Harvard Biology Department. Over the next few years they engaged in a fierce battle for the soul of biology: what were the key issues for biology research for the next few decades? E. O., a field biologist rapidly becoming the world's greatest expert on ants, argued vigorously for a holistic approach: looking at whole animals, their behavior, and how they interacted with others and their environments. J. D., fresh off his success in deciphering the structure of DNA and offering a molecular model for evolution, argued vigorously for a reductionist approach: studying the molecular mechanisms of biology and the genome. The result was a split into two departments and, essentially, a victory for Watson. Although there is excellent research in both areas, for the past half century the strongest focus has been on molecular biology and molecular models. Premier biology research institutes are often entirely focused on molecular and cellular biology, and far more funding goes into that area. I personally think this is a problem: for the next half century, we are going to HAVE to understand the systemic aspects of ecology – both for our interaction with the planet and even for medicine (through treating the human as an ecosystem, including our microbiome and the effects of social and environmental interactions on it).
Of course this digression is inspired by the quiz's choices: Jane Goodall – a premier field biologist in the Wilson mold (though she came through anthropology as a student of Louis Leakey) – and Rosalind Franklin – a premier biochemist in the Watson mold (whose work was critical in allowing the Watson-Crick breakthrough).

An interesting point for another post is that evolution is the bridge that spans the Wilson/Watson abyss. Evolution is not a hypothesis, or even really a theory, but rather a conclusion that grows out of a number of fundamental principles based strongly in observation and experiment: heredity (through DNA and its copying mechanism), variation, morphogenesis (the building of a phenotype – the individual organism – from the genomic information), and natural selection. (One might choose a different set, but this is one I like so far.) The first lies firmly on the Watson side, the last on the Wilson side. You can't make sense of evolution unless you are willing to consider both ends.]

We now return to our main program. Why did I pick Emmy over Jane and Rosalind, both of whose work I have actually read and consider immensely important?

The reason is that, for me as a physicist, Emmy Noether's result was a total game changer in the way I think about physics, the epistemology of physics, and how the world works. To state her result crudely, in a way the non-mathematician might understand, Noether's theorem says:

If you have a system of interacting objects whose behavior in time is governed by a set of equations that have a symmetry, then you can find a conserved quantity.

By a "symmetry", she means that you can change something about your description that doesn't change the math. By a "conserved quantity" she means something you can calculate that doesn't change as the system changes through time. (Of course Noether's theorem is a mathematical statement and there are conditions and a process to find the conserved quantity, but that requires a lot of math to elaborate. I refer you to the Wikipedia article on Noether's theorem for those who want the details. Warning: It requires knowledge of Lagrangians and Hamiltonian – junior level physics.)

This is a little dense. Let's take an example or three to see just what it means.
Suppose I have a set of interacting objects – something like the planets in the solar system interacting via gravity, or a set of atoms and molecules interacting via electric forces. We can describe these interactions using either forces or energy. (The two approaches can be shown to be mathematically equivalent, though each tends to foreground different ways of thinking about the system.) The key is that the interactions of the objects depend only on the distances between them. This means that I can choose any coordinate system to describe the system: I can put my reference point – the 0 of my coordinates, or origin – anywhere I want. Whatever origin I choose, the separation between two objects is the difference of their positions, and when you subtract the positions, the origin cancels.
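In symbols: shift the origin by a fixed vector $\vec a$, so that every position is relabeled $\vec r_i \to \vec r_i - \vec a$. Every separation is untouched:

$$\vec r_i\,' - \vec r_j\,' \;=\; (\vec r_i - \vec a) - (\vec r_j - \vec a) \;=\; \vec r_i - \vec r_j .$$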

This is a symmetry. The equations that describe the motion of the system do not change depending on the position of the origin of the coordinate system. You can choose it as you like – and we typically pick an origin that will make the calculation simpler. This symmetry is called translation invariance. It means you can shift (translate) the origin freely without anything changing.

But what Noether's theorem shows is that the symmetry doesn't just mean we are allowed to choose the coordinate system that makes the calculation simpler; it says there is a conserved quantity, and it allows you to find and calculate it.
In the case of translation invariance, Noether's conserved quantity is momentum – in most cases, the product of mass and velocity for each object. You calculate the momentum of each object in the system and add them up at one time; at any later time you will always get the same answer, no matter how the objects have moved – even though the motions may be amazingly complicated and may involve billions of particles!
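Here is a minimal numerical sketch of that bookkeeping – the force law and all the numbers are invented for illustration. Any purely pairwise, separation-dependent force will do; the total momentum comes out the same before and after, however tangled the motion.

```python
# Minimal sketch (invented force law and numbers): the total momentum of a
# system with purely pairwise, separation-dependent forces stays fixed
# (up to floating-point rounding).
import numpy as np

rng = np.random.default_rng(0)
n = 5
m = rng.uniform(1.0, 3.0, n)            # masses
x = rng.normal(0.0, 1.0, (n, 3))        # positions
v = rng.normal(0.0, 0.1, (n, 3))        # velocities

def forces(x):
    """Spring-like attraction between every pair: depends only on separations."""
    f = np.zeros_like(x)
    for i in range(n):
        for j in range(i + 1, n):
            fij = -(x[i] - x[j])        # force on i from j
            f[i] += fij
            f[j] -= fij                 # Newton's third law partner
    return f

def total_momentum(m, v):
    return (m[:, None] * v).sum(axis=0)

print("before:", total_momentum(m, v))
dt = 0.001
for _ in range(20_000):                 # crude time-stepping is enough here
    v = v + forces(x) / m[:, None] * dt
    x = x + v * dt
print("after: ", total_momentum(m, v))
```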

This is immensely important and has powerful practical implications. One technical example is, "How can you figure out how protons move inside a nucleus or electrons move inside an atom?" In the case of protons, we don't actually know exactly what the force law between two protons is (though there are lots of models), but we are pretty sure it depends only on the distance between them. But we can shoot very fast protons at a nucleus. Sometimes one will strike a proton moving in the nucleus and knock it out. If we measure the momenta of the two outgoing protons, then, since we know the momentum of the incoming proton, we can use momentum conservation to infer the initial momentum of the struck proton inside the nucleus. We then do a lot of these scatterings and get a probability distribution for the momenta of protons inside the nucleus.
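In symbols (ignoring the much gentler recoil of the leftover nucleus), conservation gives the struck proton's initial momentum directly from the three measured or known momenta:

$$\vec p_{\text{struck}} \;=\; \vec p_{1}\,' + \vec p_{2}\,' - \vec p_{\text{beam}} .$$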

Since we do know the force between electrons and the nucleus (the electric force), this technique is extremely powerful for studying the structure of atoms and molecules. While this seems rather technical, we'll see that there are even more important implications than providing a measurement tool for difficult-to-observe quantum systems.

Two other fairly obvious symmetries in our description of systems are:

  • Time translation invariance
  • Rotational invariance

The first, time translation, means that it doesn't matter when you start your clock (what time you take as 0 of time). This is true for most dynamic models in physics. Gravitational forces don't depend on time and neither do electrical ones. Since these are the two forces that dominate everything bigger than a nucleus, this symmetry holds for everything from atoms up to galaxies (where there are some as yet unsolved anomalies). Emmy's theorem says that due to the time translation symmetry there is a conserved quantity – in this case energy.

The second, rotational invariance, means that it doesn't matter in which direction you point your axes. You can take the positive x direction as being towards the north star or towards the middle star of Orion's belt. (You want your coordinates to be fixed in space, not rotating with the earth or you introduce fake forces like centrifugal force and Coriolis forces.) The conserved quantity that goes with this is angular momentum, another useful principle (though more complicated to use because of more vectors).

OK. That tells us what Noether's theorem gives us – important conservation laws like those for (linear) momentum, energy, and angular momentum. But we learn about these in introductory physics classes without needing a sophisticated theorem. What does it add?

For me, it adds something deeply epistemological – something fundamental about what we know in physics and how we know it. It shows that two very different things are tightly related: how we are allowed to describe the system at a given instant of time without changing anything (where we can choose our space and time coordinates) – a purely static statement about what kinds of forces or energies we have – and how the system moves in time – a dynamic statement about how things change.

This is immensely powerful. This means that if I have created a mathematical model of a system and I find that energy is NOT conserved, I know that either I have made a mistake, or I have assumed interactions that change with time. If I find that momentum is NOT conserved, I know that I must have tied something to a fixed origin rather than to a relative coordinate between two objects.

Now this isn't always wrong or bad. If I have a particle moving through a vibrating fluid, I might want to treat the fluid as a fixed, time-dependent potential energy field. This will mean that the energy of my particle is not conserved, and where the energy goes (into the fluid) will not be correctly represented in this model.

A more common example is projectiles or falling bodies. Since the earth is so much larger than our projectiles, we take the origin of our coordinates to be a fixed point on the earth instead of taking the force to depend (as it actually does) on the distance between the center of the earth and the projectile. This means we won't see momentum conserved, since we have fixed the earth: momentum transfer to it will not be correctly represented. This might not matter, depending on what we want to focus on.

But what Noether's theorem shows us is that there are powerful – and absolute – links between two distinct ways of thinking about complex systems: the structure of the mathematical models we set up to describe systems, and the characteristics of how those systems evolve in time. That the result can be something as powerful and useful as a conservation law blew me away. More, we now know exactly which characteristics of a mathematical model lead to a conservation law! There is nothing analogous to this in biology or chemistry – except as it is inherited from Noether's theorem in the mathematical models biologists or chemists build, or as they use energy or charge conservation. But as far as I can tell, they rarely pay attention to conservation laws – even when doing so might do them some good.

It also showed me that when you build mathematical models you occasionally hit the jackpot: you get out more than you thought you put in. Extensions of Noether's theorem to other symmetries have become a powerful tool in constructing new models of dynamics. Instead of trying to invent new force laws, we look experimentally for conservation laws, find symmetries that can give those conservation laws, and construct new dynamical models by putting together variables that fit the symmetry. This is the way much of particle physics has functioned for the past 50 years.


So that question on the quiz is probably the best selector for the "physicist" category. Goodall and Franklin both did essential and pivotal work in their fields, but Noether's was a core pillar for all of 20th-century physics and, for me, won hands down. Thanks, Emmy!

12 March 2016

Congratulations, Bernie!

Congratulations, Bernie, on a surprise win in the Michigan primary! But my Bernie-phile friends: Please don't fall for the bad cognitive errors I've seen some supporters distributing in responses: binary and one-step thinking, and being misled by inapt metaphors.

First, "a win-is-a-win" carries a lot of associational baggage, some of which may be true but which is certainly worth some careful analysis, but it's a binary thinking error. In Michigan, Bernie beat Hillary by 1.5% of the vote. A win, right? But in delegate count – what matters in this primary election – Hillary took 70 and Bernie 67, increasing her lead. For the primaries and for the election as a whole, one needs to keep in mind that we live in a republic, not a democracy. That means we elect a representative government, and do not directly elect a president. Winning the total popular vote is not the point (just ask Al Gore) and this is reflected in both the Democratic and Republican primaries, though in different ways.

To see how this works, consider three districts of 10,000 voters each. The winner of each district gets a delegate. Suppose candidate H wins two districts by 6,000 to 4,000 and candidate B wins one district by 9,000 to 1,000. Candidate H gets a total of 13,000 votes, while candidate B gets 17,000. A big popular-vote margin for B (57% to 43%) but a win for H (2 delegates to 1). While this feels unfair, it's a way of guaranteeing that the political process requires coalition building among diverse sub-populations. We're seeing this in Bernie and Hillary's struggle to get the votes of different ethnic groups, different age groups, and different economic classes.
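The toy example as a few lines of code, for anyone who wants to fiddle with the numbers (they are just the numbers from the paragraph above):

```python
# Three districts of 10,000 voters each; each district's delegate goes to its winner.
districts = [(6_000, 4_000), (6_000, 4_000), (1_000, 9_000)]  # (votes for H, votes for B)

votes_H = sum(h for h, b in districts)                 # 13,000
votes_B = sum(b for h, b in districts)                 # 17,000
delegates_H = sum(1 for h, b in districts if h > b)    # 2
delegates_B = sum(1 for h, b in districts if b > h)    # 1

total = votes_H + votes_B
print(f"popular vote: H {votes_H/total:.0%}, B {votes_B/total:.0%}")  # H 43%, B 57%
print(f"delegates:    H {delegates_H}, B {delegates_B}")              # H 2, B 1
```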

In a parliamentary democracy with many parties, as in many European countries, this plays out through building coalitions among parties. In the USA, with only two parties, the coalitions are built at this stage. I don't think this is a bad thing, as I believe the strength of America is our ability to (sometimes gingerly) bring together many different viewpoints, ethnic groups, and cultures, and get them to live together in reasonable harmony without frequent tribal and inter-group violence (so far). (Sorry, Black Lives Matter, I'm not trying to belittle your legitimate claims about inter-group violence in the US, only to point out that, while horrible, it has not reached the level of open warfare, and we seem finally to be bringing it into the open enough to possibly make some positive progress.)

Second, well, but "it's an unprecedented upset." This one-step thinking also carries a lot of associational baggage: it means "momentum"! Look at the derivative! That implies big change. Well, perhaps, but one learns in science that extrapolating derivatives is a tricky and unstable business. (See Mark Twain's quote on the growth of the Mississippi Delta.) Also, the "upset" depends on the difference between a poll and an election. An election is the event: its result is what it is (modulo errors, cheating, hanging chads, etc.). A poll is a sample, much more akin to a measurement in physics. This plays quite well with stuff I teach in my physics class about measurement.

A measurement in physics is also a sample: an attempt to determine a property of something by "tasting" it – taking a little bit in a way that lets you analyze the sample without changing the object being measured. Consider a thermometer as an example. When I'm poaching a salmon for a dinner party, I put a thermometer in my salmon poacher to test the temperature and find out how hot the water is. My students often assume that "a measurement is a measurement and gives a true value", but it doesn't work this way. A measurement is simply a conjoining of two physical systems. What makes it a measurement is a set of theoretical assumptions about the process of their interaction. In the thermometer case, we assume:

• The zeroth law of thermodynamics: Energy will move between two objects in thermal contact in the direction that equalizes their temperatures (thermal energy densities), so energy flows from a hot object into a cold one until they are the same temperature. This says we expect our thermometer to extract energy from the water until it is at the same temperature as the water.
• The probe does not significantly affect the state of the measured object: The thermometer removes some energy from the water and so reduces its temperature. We assume that it takes only a little and that the reduction can be neglected. If I used my big poacher thermometer in an espresso cup to see if the coffee was too hot, the temperature the thermometer read would not be the original temperature of the coffee but something partway between.
• The probe has a linear response (written out as a formula just after this list): We calibrate our thermometers by placing them in melting ice and putting a mark at 0 °C, then in boiling water and putting a mark at 100 °C. The bimetal coil (or the liquid in the thermometer) expands as it gets hot and shifts the marker on the dial. We assume that halfway between those marks is 50 °C, and so on, but that isn't necessarily the case: the expansion could be faster per degree when it's colder and slower when it's hotter.
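That linear-response assumption, written as a formula for (say) a liquid thermometer whose column has length $\ell$, with calibration marks $\ell_0$ (melting ice) and $\ell_{100}$ (boiling water):

$$T \;=\; 100\,^{\circ}\mathrm{C} \times \frac{\ell - \ell_0}{\ell_{100} - \ell_0}$$

The assumption is that this straight line holds everywhere between the two marks; a fluid whose expansion per degree varies would read correctly at 0 °C and 100 °C and be off everywhere in between.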

Thermometers are carefully analyzed and can be trusted when used appropriately. (A similar analysis holds for voltmeters and ammeters.) But the point is: when we make a measurement, the result depends on theoretical assumptions about how our system is working.

What does this have to do with polls? Well, a poll is a sample: a few voters are chosen to stand for the full population. The sample is too small to be chosen randomly – the error would be too large. So typically polls begin with a model of the electorate's demographics: who makes up the voting population, and which of those people are likely to actually vote in this election. These models are often based on previous similar elections. But Michigan has not held a truly competitive Democratic primary in a long time. In 2012, Obama was unopposed. In 2008, Michigan tried to slip its primary forward in time so as to be more important, and the DNC stripped it of half its delegates; many of the candidates (including Obama) refused to campaign. The two contests before those were caucuses.
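Here's a cartoon of why the demographic model matters as much as the raw responses – all the numbers are invented. The same sample produces different poll numbers under different assumed turnout models, with no change in anyone's opinion.

```python
# Cartoon of poll weighting (all numbers invented).
# Each respondent group gets weighted by its assumed share of the electorate.
sample = {                 # group: (respondents, fraction supporting candidate B)
    "under_30": (100, 0.70),
    "30_to_60": (300, 0.45),
    "over_60":  (200, 0.35),
}

def estimate(turnout_model):
    """Weight each group's support by its modeled share of actual voters."""
    return sum(share * sample[g][1] for g, share in turnout_model.items())

model_old  = {"under_30": 0.15, "30_to_60": 0.45, "over_60": 0.40}  # assumed
model_wave = {"under_30": 0.30, "30_to_60": 0.40, "over_60": 0.30}  # assumed

print(f"support for B, old-turnout model: {estimate(model_old):.1%}")
print(f"support for B, youth-wave model:  {estimate(model_wave):.1%}")
```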

So it may be that there is a tidal wave of surprise support for Bernie. But it could also be that the Michigan polls were based on crappy models – a failure of polling, yes, but not one representing a shift in support. The way we will tell: do somewhat similar states, such as Illinois and Ohio, which have had more recent contested primaries and which vote next week, also show significant underpolling for Bernie? I am willing to wait and see.

Third, I'm afraid I'm seeing a lot of "Cinderella underdog" metaphors: the idea that somehow the election is like a basketball tournament and you just have to keep winning the popular vote. But because of the electoral college this is a terrible metaphor and leads us astray. As Democrats, we want to win the presidency. To do so we need a path to 270 electoral votes, and since the states are almost all (except, I think, Maine and Nebraska) winner-take-all, that takes a careful analysis of electoral strategy: how and where to devote resources to get out the vote – and which populations to concentrate on. This is where the great detail we are getting in the Democratic primaries can help us. And it is why "national polls" of one candidate against the other are, especially this early in the game, essentially useless. Not only do they show dramatic swings as the candidates face off against each other, they don't take into account the actual election mechanism.

If neither candidate gets a majority of the delegates as a result of the primaries (remember all those "superdelegates", or SDs), here's what I hope would happen. The SDs would all throw away their current commitments and turn to the Quants – the quantitative analysts – who would build models of the presidential election based on various models of the electorate and the details of the primary results in the various states. There would be a spread (spray) of results – similar to what you see for the projected paths of a hurricane – because of different assumptions plus random factors. The SDs would then use their personal knowledge of their own districts to evaluate those models and make their choices. That seems to me a good reason to have SDs.
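A toy version of the Quants' exercise – states, electoral-vote counts, and win probabilities all invented: run the model many times and look at the spread of outcomes, the electoral analogue of the spray of hurricane tracks.

```python
# Toy Monte Carlo of an Electoral College model (all inputs invented).
import random

random.seed(1)
# (electoral votes, modeled probability that our candidate carries the state)
states = [(29, 0.55), (20, 0.48), (18, 0.52), (16, 0.50), (15, 0.45)] * 10
needed = sum(ev for ev, _ in states) // 2 + 1

outcomes = []
for _ in range(10_000):
    outcomes.append(sum(ev for ev, p in states if random.random() < p))

wins = sum(1 for ev in outcomes if ev >= needed)
print(f"median EV: {sorted(outcomes)[len(outcomes) // 2]}, "
      f"wins the College in {wins / len(outcomes):.0%} of runs")
```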

Maybe I'm dreaming to hope that things would work out this way and that we would make the best choice for the fall election based on a detailed analysis of what we have learned from the primaries; I'm a bit afraid that the SDs would look to support their personal interests rather than the interests of the party. I'm sure that wouldn't be true of my SDs – representatives whom I voted for and like very much. It's just all those other folks you voted for!

In any case, I will actively support whoever appears to have the best likelihood of winning the actual election, based on a careful analysis of our country's complex voting process – not based on my agreement with their programs (Bernie 98%, Hillary 94%), nor on my assessment of who is likely to be a more effective president in practice (Hillary 4 : Bernie 1). I am very dismayed at the direction the Republican party has been trending over the past 35 years, and it seems to be getting worse and worse. (Full disclosure: I voted for Republicans in New York State Senate elections in the 1960s but have never voted for a Republican presidential candidate.)


So to my Bernie-phile friends who say he can win, I say, OK, show me! I'm watching!

23 November 2015

My teaching philosophy

I got my teaching position decades ago, long before anyone started to ask candidates to write a "Teaching Philosophy." I recently had to create one for an application for internal University funding. Despite having written about teaching for decades (I wrote a small book about it), I found it an interesting challenge to try to condense it all into a page-and-a-half.  For your amusement, here it is.
****************************************************************
My teaching philosophy is based on nearly 45 years of teaching students at the University of Maryland and more than 20 years of carrying out Discipline Based Education Research with students attempting to learn physics. It is also informed by my readings of the literature in education, psychology, sociology, and linguistics.

My teaching philosophy grows out of a few basic principles:
  • It's not what the teacher does in a class that determines learning, it's what the students do. Learning is something that takes place in the student. And deep learning – sense making – involves more than just rote. It involves making meaning: making strong associations with other things that the students already know and organizing knowledge into coherent and usable structures.
  • I can explain for you, but I can't understand for you. Students assemble their responses to instruction from what they already know – appropriately or inappropriately. This can lead to what appear to be preconceptions that are incorrect and robust. Note, however, that these may be created “on the fly” in response to new information that is being presented.
  • Students' expectations matter. The expectations that students have developed about knowledge and how to learn (epistemology), based on previous experiences with schooling, are extremely important. Their answers to the questions, "What's the nature of the knowledge we are learning? [e.g., facts or productive tools?] What do I have to do to learn it? [e.g., memorize or sense-make?]" may matter as much or more than the preconceptions they bring in about content.
  • Science is a social activity. I'm teaching science, and science is all about how we know what we know. This is decided not by some algorithm but by a social process of sharing results, mutual evaluation, peer review, criticism, and discussion. Presenting a set of results to be repeated back is not science. Learning to do science means learning to participate in scientific conversations.
These lead me to rely heavily on a number of fundamental teaching guidelines:
  1.   Minds on – Look for activities that will engage students' thinking and relevant experiences, making connections to things they know and are comfortable with.
  2. Active engagement – Set up classes so that there is more for students to do, less listening.
  3. Metacognition – Encourage students to be more explicit about their thinking, planning, evaluating. As a teacher, be explicit about your thinking and why you are asking them to do what you are asking them to do.
  4. Enable good mistakes – Mistakes that you can learn from are "good mistakes." Set up situations where your students will learn to think about their thinking and how to debug their errors – but do it supportively with some but not too much penalty for errors.
  5. Group work – Create situations where students are expected to discuss scientific ideas with their peers, both in and out of class. And finally,
  6.  Listen! – To create the activities described above, you need to know how students are responding. Therefore, set up situations that will let you hear what students are thinking and doing.
These ideas lead me to use lots of explicit techniques in class, including: having students read text and submit questions before class; asking challenging (and sometimes intentionally ambiguous) clicker questions followed by discussions of "why" and "how do we know"; facilitating lots of group discussion, with "find someone who disagrees with you and see if you can convince them" as part of each class session; encouraging students to ask for regrades on quizzes and exams; and offering second-chance exams, among others.

My experience with all this leads me to three concluding overarching ideas.

Diagnosis – When I first began teaching (and for the first 30 years or so), if a student asked me a question, my instinct was to answer it. In doing so I was drawing on my experience as "the good student" and had not yet transitioned to being "the teacher". I had to learn that being the good student was no longer my job. My job was not necessarily to answer the student's question, but rather to consider, "Why couldn't this student answer this question for him/herself despite my having taught the material in class?" My job is in part to diagnose the students' difficulties, not answer their questions. That requires a dramatically different interaction with my students. And learning when to answer a question directly (sometimes the right thing to do) is subtle.

Respecting different perspectives – In the past five years, working closely with students from a different discipline than my own, I have learned that many views that seemed to me bizarre or just plain wrong were actually well justified in appropriate contexts. I have also learned from these same students that many of the approaches and results I took for granted, and was used to teaching in my own discipline, had hidden assumptions and required perspectives that seem unnatural unless viewed with an expert's knowledge and a sense of the longer-term implications and applications.

Responsive teaching – Everything comes together in a fundamental overarching and unifying guideline:

Listen to your students. Understand how they are interpreting and understanding (or misunderstanding) what you are teaching. Respect their views and what they bring to class, and respond by adjusting your instruction to match.

This doesn't mean giving up your own view of what you want to teach or want them to learn. It means developing a good understanding of where they are and how you can help them get to where you want them to be.

02 December 2014

Leopold Bloom and the Ontology of Cognitive Dynamics

As a result of some traveling, I didn't have a chance to get to the library and stock up on my usual relaxation reading of trashy mystery novels. I find them diverting and totally non-memorable. That's great! In a few years I can read them again and not remember how they turned out. I often read four or five a week.
I found myself with a lot of work to do and nothing to read during breaks. You can only do so many crossword puzzles. (Being on sabbatical doesn't mean you don't work – it means you work on the stuff you want to work on!) So I started perusing our collection of more serious novels to find something I had always wanted to read but had missed. Something interesting, but not too engrossing. When I pick up the really good stuff, I often get involved and read for four hours or more, blowing off the work I intended to do. I needed something that I could put down after a 20-to-30-minute break. So, what's it going to be? Infinite Jest? Or Ulysses?
It was recently the birthday of the woman who published Ulysses, first as a serial, then as a bound book. (The books were confiscated and burned.) I learned about this from The Writer's Almanac the other day, so I decided to try Joyce's masterpiece.
Ulysses certainly seems to meet my requirements. It's interesting, but challenging – and pretty easy to put down. Joyce was one of the first to do a true "stream of consciousness" novel and it's been some time since I read another one. (Virginia Woolf – some years ago.)
Chapter one is about the poet and philosopher Stephen Dedalus, whom I remember from A Portrait of the Artist as a Young Man. The stream of his thoughts is difficult: it feels like half the words are made up, and the other half are ones that I think are real but don't know – many in French, Latin, or German, well above my limited capabilities in those languages. But he's interested in interesting things.
Chapter two switches to a more prosaic character, Leopold Bloom. His thoughts run more to living in the moment and reacting to his context than to musing on deep issues like the transmigration of souls (metempsychosis – one of Dedalus' interests). He thinks about food, interacting with his cat, the people he meets in the street, sex. (Dedalus is interested in sex too, but at a more poetic level.)
After two chapters, I've already found a number of things interesting about Ulysses. First, how true the stream of consciousness seems. My own stream of consciousness includes both Dedalus and Bloom kinds of thinking, and when I analyze my own thoughts, they really do the sort of thing that Joyce is transcribing. But second, how false it feels. The thoughts of Dedalus and Bloom both feel (in my response to reading them) choppy, disconnected, scattered. The thoughts in my own head feel (mostly) natural, coherent, flowing, despite looking similar when transcribed. Why the difference?
My suspicion is that the key is personal meaning making. The hard part of explaining this is the critical question: what does "meaning" mean? It's a bit tricky – besides being self-referential. The definition I like best comes from reading semanticists and cognitive linguists (Langacker [1], Lakoff, Fauconnier). The idea is that our concepts and thoughts are interpreted in terms of a large web of encyclopedic knowledge about the world we each live in. Meaning is an aura of associations – a subset of our world knowledge that we each activate in the moment to interpret an idea or concept.
This has a lot of implications that help me make sense of the world I see. First, it suggests that the meaning a given individual gives to an utterance or observation – something they hear or read – can depend strongly on context. Our interpretation of the context we are in (framing) controls which parts of our huge store of encyclopedic knowledge are primed – not necessarily in our conscious mind at the moment, but sort of "first in line" to get activated when a chain of associations is generated to run through our limited working memory.
Let me now turn back to the question of stream of consciousness. Joyce provides what might be an accurate transcript of Leopold Bloom's conscious thoughts (OK, LB is a fictional character, but you know what I mean!), but Bloom's stream is not only what the transcript says. He has presumably activated a whole set of associations with each item – and those are often associations I don't have. (This is even worse for me with Stephen Dedalus, since he is a contemplative Catholic and religion plays a huge role in many of his interpretations. There's a lot I'm missing there.) Bloom's chain is constructed with invisible links that his aura of associations makes with each term. They provide the glue that sticks the pieces together and makes them feel coherent. When I interpret Joyce's transcript, I do so with my own mental transcript, making my own meanings – and my auras of association don't always overlap enough with Bloom's for his stream to feel coherent.
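A cartoon of that picture in code – the words, links, and mechanism are all invented, just to make the point concrete: each step of the chain follows a private associative link, so the chain feels coherent from inside, while a reader sees only the surface items.

```python
# Invented toy: a chain of associations whose links are invisible in the output.
import random

random.seed(4)
associations = {           # each idea points to its owner's private associations
    "kidney":    ["breakfast", "butcher", "cat"],
    "breakfast": ["tea", "burned toast", "Molly"],
    "butcher":   ["next door", "sausages"],
    "cat":       ["milk", "mouse"],
    "tea":       ["Molly", "kettle"],
}

idea = "kidney"
stream = [idea]
for _ in range(6):
    idea = random.choice(associations.get(idea, ["kidney"]))  # follow a link
    stream.append(idea)

# The "transcript" a reader gets: surface items only, links stripped away.
print(" ... ".join(stream))
```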
One thing this says to me is that "stream of consciousness" as a literary device leaves much to be desired. In his recent book, The Sense of Style, Steven Pinker has a marvelous chapter that gives beautiful advice about writing clearly for good communication. (I highly recommend Chapter 3 for teachers as well! [2]) The key idea is to structure your writing so that readers are given sufficient information to activate their interior contexts and create the intended meaning from your text. In stream of consciousness writing this becomes almost impossible. In Dedalus' stream, it is clear that local politics and the theological issues of interest around 1900 significantly inform meaning for him. Hard for me to make this out without a scholarly "Handbook to Ulysses" – and I don't have enough interest in those issues to get one.
I conclude that communicating well with stream of consciousness is exceedingly difficult – particularly if one wants one's work to be perceived as meaningful in later generations. Too much needs to be explicated for your reader to both create the meaning you want and to make the flow of thought seem natural.
Now, those of you who know me know I'm not a literary critic. If you made it this far, you've been patiently waiting for me to get to the point. Here it is, 1000 words in. (Don't do this in a research paper!)
I am both a teacher and an education researcher. A lot of my research is qualitative. My data are often transcripts of videos of interviews, group problem solving, and focus groups. I often have to try to interpret what students are saying. I want to know not just what they say, but to go beyond the transcript and infer what meaning they are making. (I would normally have said "if any", but given the definition of "meaning" above, my students are always "making meaning", just not necessarily the kind of meaning I want them to.) My colleagues and I draw on a variety of tools to infer this – gestures, word choice, tone of voice, etc. – together with our understanding of the context and our everyday communication skills. Of course one must also bring a theoretical perspective on how to interpret what one sees, to transform an observation into a measurement.
For the interpretation of student responses there are two extreme theoretical orientations: knowledge-in-pieces theory (KiP) and theory-theory (θ2). The former views students as having lots of bits of "irreducible" knowledge or "primitives". These are the places where any reasoning chain [3] of
claim ⇐ (data + warrant); warrant = claim ⇐ (data + warrant); …
ends. A primitive is something like "unsupported objects fall" or "push harder and it will move faster". Of course, in physics, we create complex reasoning for these, but in "folk-physics" models of the world, these are things you learned as an infant by watching and testing how the world worked. They form the core of lots of our everyday thinking.
The KiP approach starts by assuming students tend to bring up individual primitives (or resources) and try to get by with that, or that they bring up an easily generated story composed of a few simple primitives (as in Kahneman's "fast thinking" [4]). KiP researchers then try to analyze more complex patterns of reasoning and build up an understanding of "knowledge structures".
The θ2 approach starts by assuming students have a coherent theory of a phenomenon, and analysis is informed by this assumption. But what we as scientists see as a single coherent phenomenon or set of phenomena may be seen by students as being governed by different coherent (but more local) theories.
These two approaches start from opposite ends and move towards each other. We might imagine a continuum between these two extremes. Some student responses could be more towards one end than the other, but empirical observation might let us determine where on that continuum a particular student's response on a particular subject belongs.
I suggest that the situation is more complex than can be described by a single continuum and that my ramblings on reading Ulysses are relevant to seeing how.
My stream of consciousness story says that in anyone's thoughts there is a continual chain of sequentially associated items popping up, and that while these may appear incoherent to an outside observer, to the one experiencing the chain local meaning creates a sense of coherence in the local flow. But in this picture, the self-perception of coherence is about how thoughts are changing moment to moment, not necessarily about the long-term, constant activation of a coherent theory summing up and managing a multi-minute-long argument.
One may feel that one's own thoughts are coherent – and they may be – but I suggest that a personal feeling of coherence depends on a derivative (information local to a moment) rather than an integral (information over a long time scale) and may be misleading. This could be why a number of educational theorists I have conversed with feel strongly that one must begin by assuming coherence. It just feels that way from inside!
Of course when we are teaching physics to students, one of our long-term goals is for them to learn to build large-scale coherent arguments, with reasoning that reaches over many minutes, not just a step or two. Often, it looks to me as a teacher that many a novice physics student can't put three steps together without forgetting the first one!
When my research activity turns to analyzing a transcript of a student solving a physics problem, I'm often interested in their fine-grained stream of consciousness and the particular association that drives them in the moment. 
In a problem about Newton's third law (two interacting objects exert equal and opposite forces on each other), have they recalled Newton's second law (a = Fnet/m) or its folk-physics equivalent (an object moves in proportion to the force acting on it) and focused only on the force, ignoring the effect of different masses? In a problem on pressure in a liquid, have they focused on one variable (the depth), failing to be coherent about the implications of their choice of coordinate system on the sign of g? Is their response affected by locally activated epistemological resources, such as "trust my physical intuition" or "the authorities must know what they are talking about"? By affective responses: "This is scary" or "My intuition always disagrees with physics"? There are lots of local questions that are deeply interesting. [5]
That's all very KiP driven. But we do all have long term coherences in our everyday thinking. There are patterns and regularities that last over very long time scales. 
At age four, my daughter was able to sit in one place with a game or coloring book for an hour or more, totally engaged. If I watched closely, she might be jumping from one idea to the next with what looked like little coherence, but often she was building a story, shifting and changing it, trying one thing then another until it felt right in the moment. And there was a long-term frame – the storytelling itself – coherent and persistent over a long time scale. My students have even longer-term coherences: over an entire semester, they regularly activate "My intuition always disagrees with physics and I should ignore it" at the first sign of trouble. My own long-term, highly stable coherences include "always start with an equation you can trust."
So what is an appropriate ontology for thinking about our students' thinking? Should we pay more attention to the fact that thinking is often local and driven by short-term coherences explicable using a KiP-like analysis? Or to the long-term framing and average patterns that appear, and look more like θ2, when you step back and look at a coarser grain size?
Of course my answer is that you have to do both to get a complete picture. A nice example of this kind of "two-scale-doublethink" is provided in many-body quantum physics. I'll explain that in my next post.


[1] R. Langacker, Foundations of Cognitive Grammar (Stanford University Press, 1987).
[2] Unfortunately, his Chapter 4 proceeds to violate most of the precepts in Chapter 3 and is almost incomprehensible. Maybe he intends it to be an "exercise for the reader" to figure out how to fix it. Very "active learning"!
[3] This chain is based on Toulmin's analysis of reasoning. Every claim must be supported by data, and the reason the data supports the claim is a warrant. But every warrant is also a claim, so, like a four-year old, we can continue asking "why" (demanding data and warrants) forever. This chain stops at primitives: Things we know from our everyday experience that we have no reason for. They are "just the way things are."
[4] D. Kahneman, Thinking, Fast and Slow (Farrar, Straus and Giroux, 2011).
[5] A. Gupta & A. Elby, Int. J. Sci. Ed. 33:18 (2011) 2463-2488.