John Meuser's Notebook
These are clumps of raw unprocessed thought dredged into words and maybe even 
sentences.
They are organized as a stack, i.e. the most recent note is at the top.

20151207T2002 My First Answer to a Stack Overflow Question
The question can be found here:
http://stackoverflow.com/questions/30787453/can-you-pass-a-struct-fieldname-in-to-a-function-in-golang/34146666#34146666

It asked if it was possible to pass a field name as a value to a function. This 
is my solution:

Use type assertions on an interface value:

package main

import "fmt"

type Test struct {
    S string
    I int
}

func (t *Test) setField(name string, value interface{}) {
    switch name {
    case "S":
        t.S = value.(string)
    case "I":
        t.I = value.(int)
    }
}

func main() {
    t := &Test{"Hello", 0}
    fmt.Println(t.S, t.I)
    t.setField("S", "Goodbye")
    t.setField("I", 1)
    fmt.Println(t.S, t.I)
}
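
One caveat worth noting: a bare type assertion like value.(string) panics if
the value has a different dynamic type. Here is a minimal sketch of a safer
variant using the comma-ok form (it reuses the Test type above; the name
setFieldSafe is my own, not part of the original answer):

func (t *Test) setFieldSafe(name string, value interface{}) bool {
    switch name {
    case "S":
        // The comma-ok form reports failure instead of panicking.
        if s, ok := value.(string); ok {
            t.S = s
            return true
        }
    case "I":
        if i, ok := value.(int); ok {
            t.I = i
            return true
        }
    }
    return false
}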





20151205T1743 Constructive Formalism
Constructive formalism reduces logic to mathematics by identifying a 
constructive (or finite) proof as a formal demonstration in a transformation 
calculus that is independent of predicate and propositional logic.





20151203T1404 Update
I've spent the vast majority of the time since my last major update here 
working with the Go programming language. It is absolutely beautiful and 
powerful. I've been so engaged by it that I've put aside almost everything else 
that I was previously working on, including this website.

There are two projects that I've been working on:
 building an Integrated Library System as a web app in Go
 redesigning this website

The current design of the website has always been a placeholder, but it is now
in desperate need of a complete and total overhaul. I was waiting to think
seriously about redesigning this website until I had a real project that
required me to learn the ins and outs of developing for the web: the ILS is
that project.

Using Go I have been able to build a simple web server that gives the user
access to the tools needed to search, introduce, transform, and eliminate books
and catalogues across libraries, all through a simple web-based interface.
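
At its core is nothing more exotic than the standard net/http package. The
following is a minimal sketch of the shape of such a server; the route and
handler names here are placeholders of my own, not the actual ILS code:

package main

import (
    "fmt"
    "log"
    "net/http"
)

// searchHandler stands in for the kind of handler the ILS dispatches to; the
// real application routes search, introduce, transform, and eliminate
// operations to the catalogue backend.
func searchHandler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "results for %q\n", r.URL.Query().Get("q"))
}

func main() {
    http.HandleFunc("/search", searchHandler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}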





20151122T1924 Why aren't your links hyperlinked?
At the end of a lot of the pages that I keep on this website, I give references 
to the websites and books which informed the creation of that page.
I do not automatically transform a URL pathname into a hyperlink because it is
simple to use this to HIDE a link that redirects to a malicious or otherwise
unintended site.
I know that I have hyperlinks on my front page, and I'm trying to figure out 
what the best thing to do with them is, but it's just one of many things that 
I'm working on at the moment, and is not "the main thing".
Although hyperlinks are considered an essential part of what makes the web "THE
WEB", they are also an easily abused tool, and require an unnecessary measure
of trust on the part of the user.





20151111T1146 Contrary to Popular Belief, Computers are Bad at Math
Contrary to popular belief, computers are bad at math.
I mean this strictly: they are absolutely horrible at doing math.
This is an uncommon statement to make in today's world, where computers are 
thought of as math machines that let us do our taxes, play silly games, and 
write an essay or email.

Computers are good at one thing: following a very very very small collection of 
limited commands.
That a computer can "do math" simply by following a very very small collection 
of commands is a fantastic feat of human ingenuity.
Sadly, the belief that "doing mathematics" is a synonym for "following 
instructions" is both common and naive.
A clever person might think "I know where he's going with this: math is a
creative thing and computers are bad at being creative".
That person would be wrong.

Computers fail to easily manage even the simplest mathematical objects: numbers 
and numerals.
It is common to think of a computer as a sophisticated hand held digital 
calculator that turns commands like "2 + 2" into a set of instructions that 
produces the result "4".
For small numerals it is hard to see why computers could possibly be said to be 
"bad at math".
They certainly know how to perform a limited number of elementary arithmetical 
acts rapidly, and (as far as a common person can tell) accurately and precisely.
Sadly, telling a computer how to do arithmetic on integers that do not fit in
a signed 32-bit word, i.e. those beyond the range -2147483648 to 2147483647,
immediately causes trouble.
This is because a CPU's arithmetic unit can only act on a fixed-size word and
must use clever methods of storing the digits for "big numerals" across a
collection of words, each of which must be properly accounted for if used in an
act of addition or multiplication.
Consequently, as computers evolve we will still suffer from the same
deficiency; it is just that the numbers upon which we can perform primitive
arithmetic acts on a computer will become ever larger.
It might become more convenient to perform more and more arithmetic without
having to deal with multiword representations of large numbers when using
computers for common tasks, but when using them to "do math" there is no limit
to our ability to imagine and work with very big numbers (and to work with them
without losing a single digit of information).
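
To make the deficiency concrete, here is a small sketch in Go (my own
illustration, not anything from the sources discussed here): arithmetic on a
fixed 32-bit word silently wraps, while a multiword integer type must do extra
bookkeeping to get the right answer.

package main

import (
    "fmt"
    "math/big"
)

func main() {
    // Arithmetic on a fixed-size word wraps past its maximum value.
    var n int32 = 2147483647 // the largest signed 32-bit integer
    fmt.Println(n + 1)       // prints -2147483648

    // A multiword representation carries the digits across words.
    b := big.NewInt(2147483647)
    b.Add(b, big.NewInt(1))
    fmt.Println(b) // prints 2147483648
}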

Try dealing with the decimal representations of rational numbers and you will 
find yourself in a world of hurt.
Something that seems simple to write out with pen and paper is capable of being 
mutilated beyond recognition on a modern computer.
Most people do not have to confront these problems directly because someone 
else has already done the work of tricking the computer into using its 
servitude to a limited number of instructions to imitate a "reasonable" attempt 
at performing a desired arithmetic act.
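
A tiny Go sketch of the sort of mutilation I mean (again my own illustration):
the decimal fractions 0.1 and 0.2 have no exact binary representation, so their
sum as 64-bit floating point numbers is already wrong in the seventeenth digit,
while an exact rational type gets it right at a cost someone had to pay.

package main

import (
    "fmt"
    "math/big"
)

func main() {
    a, b := 0.1, 0.2
    fmt.Println(a + b) // prints 0.30000000000000004

    // Exact rational arithmetic avoids the mutilation.
    sum := new(big.Rat).Add(big.NewRat(1, 10), big.NewRat(2, 10))
    fmt.Println(sum) // prints 3/10
}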

If you have followed me this far and still don't believe I am speaking sense, 
then I would suggest cracking open the second volume of Knuth's The Art of 
Computer Programming which is aptly titled "Seminumerical Algorithms".
If you are familiar with basic arithmetic (as most elementary students are), 
then you will be very surprised to see what must be done to get a computer to 
add, subtract, multiply, and (goodness forbid) divide.

I want to reiterate that, though computers are horrible at doing math, they
can be made to do certain mathematical things very well; we just need a
sufficiently clever person to figure out how to trick the computer into doing
what we want with what little it can do.

Why do I bring any of this up?
A big part of my work on N is based on Goodstein's equation calculus which 
reveals WHY computers are horrible at doing the kind of math that is popular 
among humans these days.
In brief, the math that is popular among humans these days is not nearly as 
universal as was once thought.
To state it harshly, computers are forcing humans to confront difficulties that 
are only and solely the result of clinging to erroneous characterizations of 
"what math is".

This is not a surprise; rather, it was anticipated by Russell, who is actually
a source of a great deal of our modern misconceptions (as well as being an
essential part of modern conceptions of mathematics).
Specifically, Russell mentions in Chapter VIII of his Introduction to
Mathematical Philosophy, entitled "Infinite Cardinal Numbers", that "It cannot
be certain that there are in fact any infinite collections in the world", which
is an entry point to his anticipation of Goodstein's modern methods.
Furthermore, he gives a more detailed discussion in Chapter XIII, entitled "The
Axiom of Infinity and Logical Types", of how it is completely possible to
perform all basic arithmetical acts by progressing in strictly finite stages
(all terminology used here is unavoidably vague and informal, given the nature
of this discussion).

Modern math is a mishmash of arithmetic, algebra, geometry, calculus, naive set 
theory, naive function theory, and a bit of first-order logic (but just a bit).
The concepts from arithmetic to calculus have been applied to a wide range of 
problems in math and science and as a result have become "indispensable" to our 
modern world.
A vast majority of the world population are capable of following an argument 
from arithmetic, but fail (horribly) to follow anything more "sophisticated" 
like an argument which uses algebra or geometry.
An even smaller group of people follow arguments using calculus, and fewer 
still anything beyond that.
When a common person thinks of a "logical argument" it is safe to say that, 
unless they are well educated, they are not thinking of the formal logics that 
have become so powerful over the past century.

It is only with the advent of Metamathematics as a "mature" discipline that we
have been able to correct the fixation on the reduction of mathematics to
logic.
Furthermore, without the work of Rózsa Péter, the founder of Recursive Function
Theory (and also an often overlooked WOMAN in mathematics), we would hardly
have the metamathematical work of Kleene in the form found in his famous and
essential book "Introduction to Metamathematics".
The principal importance of Kleene in the removal of logic from mathematics is
in his system for arithmetizing metamathematics.
He was certainly not the first to do so; I believe that accomplishment goes to
Hilbert and Bernays in their "Grundlagen der Mathematik".
Rather, he was the first to present a simplified form capable of being applied
to a wide range of metamathematical problems (specifically those associated
with general recursive functions).

As this is a notebook I have to end my thoughts here for other matters require 
my immediate attention.





20151110T1535 Arrays, Nodes, Items, Atoms, and Words
The linear list, or one-dimensional array, is the first data structure with
which most people are familiar.
In computer science, the linear list is 

This is a notebook and, as such, contains incomplete notes; this is one of
them.





20151108T1328 How I Solve Problems: An Example, Remove Specified Characters
I solved the following problem using Java and I thought I would write about it 
here:

 Make an efficient method that eliminates characters from an ASCII string.

This is a common problem, at least common in the sense that it's a simple 
little question you might be asked in order to prove that you have the ability 
to write a program that runs and accomplishes a mildly trivial task.
I like little problems like this: they are kind of like solving a Sudoku puzzle 
(something I used to do a lot of during my math classes in high school).

My primary reason for being interested in this problem is that the solution 
which came to my mind was deeply inspired by my work on my language N as well 
as my experience with J and k.
First, whenever I solve a problem I do my best to follow (rigorously) the
general problem-solving method created by Pólya:

Understand the problem.
Make a plan.
Carry out the plan.
Look back.

To begin to understand a problem you must ask yourself three things:

What is the unknown?
What are the data?
What are the constraints?

If you cannot identify the unknown of a problem then you have not even begun
to solve it.
It is knowledge of the unknown that separates an unsolved problem from a solved 
one.
If you identify the unknown and it is familiar to you, then often that means 
that you have solved the problem before, and that you can draw on that 
experience to solve the problem at hand.
If you do not have a clear description of the unknown, something that you've 
written or described using your own language, then you must work towards 
getting this most basic bit of understanding before jumping into making plans, 
much less carrying out those plans.

Here the unknown is, in its most general form, a method or algorithm.
It is an answer to the question "What are the constraints?" that gives insight 
into what makes the unknown algorithm unique.
That is, there are plenty of algorithms, but there are only a few that satisfy 
the constraints given to us.
The principal constraint is that we must make an algorithm that eliminates
characters from a string.
As an additional constraint we are asked to make the method "efficient".

Now, by identifying the unknown and combining that knowledge with the
constraints, we go from searching among the unlimited number of algorithms out
there to finding only those algorithms which eliminate characters from a
string, and among those we must find an algorithm that is "efficient".

The goal of eliminating characters from a string is less vague than the goal of 
making such a method "efficient".
It is often the vagueness of the term "efficient" that can lead to wildly 
different solutions to the same general algorithmic problem.
Some people might imagine efficiency as a purely temporal or spatial problem:
don't take too much time or don't take up too much space.
Some people think of efficiency as a spatiotemporal problem: don't take up too 
much space-time.

Most people forget that efficiency is not simply a computational property: it's 
a property that is so vague as to include a wide range of meaningful "metrics".
We could measure efficiency using a "maintenance metric", that is, a
measurement of the amount of energy or effort needed to maintain the code
alongside a vast collection of interwoven methods scattered across servers
throughout the globe.

Whenever you are asked to make something, anything, "efficient" you are 
immediately forced to seek out more specific design constraints.
In most contexts you can assume that it's "good" to write code that not only 
runs quickly without taking up too much space, but also makes this fact obvious 
to anyone who reads your code.
That is, you not only want to guarantee that your code runs without using too 
much time or space, but also that it is easy for someone to come along and see 
just why it is that your code "should" run fast without using much space.

Sadly, even the most well written code in the world does not guarantee that the 
method will ultimately perform well alongside all the other tasks being 
completed by other methods competing for the same or nearby computational 
resources.
This is one of the reasons that testing and experimentation are unavoidable and 
invaluable when developing something that must be "efficient".
No matter what you feel your code should do, you must ask yourself "what are
the facts and what are the truths that the facts bear out".
This is often where the answer to the question "What are the data?" comes into 
play.

The word "data" is a neutral term which is often just a sophisticated way of 
referring to  "the facts".
In this problem the most obvious fact is that we're working with ASCII 
characters only.
This might also be included as a constraint; sometimes it's hard to separate
the two, sometimes it's not.
The fact that we're working only with strings having ASCII characters is a 
valuable one.
This immediately makes it easier to write an "efficient" algorithm.

There are 128 ASCII characters that are often interpreted as numerals from zero 
to one hundred and twenty seven.
In most modern computers numerals are represented using a form of binary 
positional numeral notation.
As a result, all 128 ASCII characters were encoded as seven-bit binary
numerals, with 0000000 being the first ASCII character and 1111111 being the
last.
In today's computing world it is most common for the words of a computer to be 
8, 16, 32, or 64 bit quantities.
So ASCII is commonly put into an 8 bit word where 0000 0000 is the first ASCII 
character and 0111 1111 is the last.
The remaining numerals from 1000 0000 to 1111 1111 are often used when parsing
programming languages that use the ASCII character set in their source code,
e.g. the digraphs of J are treated as single "characters" in Hui's
implementation of J.

Ultimately, the relation between ASCII characters and our solution to this
problem is that, since there are only 128 ASCII characters in total, we can
easily use a boolean array to make quick decisions as to whether a character
in a string should or should not be removed.
(If we had more time to solve this problem we might wish to look up Bloom 
Filters or other related structures for encoding the relation of belongingness 
within a computer or the digits of a binary numeral).

The plan thus far is:
 Let s be an array of characters representing the input string.
 Let r be an array of characters representing the characters to be removed from 
s.
 Let f be a binary array (boolean array) of length 128 each of whose atoms is 0.
 Note, s is for string, r is for remove, and f is for flag.
 For each character c in r change the c-th atom of f to 1.
 Thus, if f[c] is 1 then c is a character that should be eliminated, otherwise 
f[c] is 0 and c is a character that should be kept.

Now all that is left to do is to go through each character c in s and decide 
whether it should be eliminated based on the value of f[c].
If none of the characters in s are in r then we would simply return s.
So, if N is the number of characters in s then we may need to return an array
of as many as N characters.
We might want to create an array of N characters to which we will copy only 
those characters of s that are not in r, but this would be a waste of space for 
we already have an array of characters created that can be used to build the 
output string, namely s.

Suppose we created an output array of N characters, call it x.
Let the vowels (excluding the letter 'y') be the characters to be eliminated
from the string "Hello World.".
To make the discussion concrete, we will use Java:

char[] s = "Hello World.".toCharArray();
char[] r = {'a','e','i','o','u'};
char[] x = new char[12]; // there are twelve characters in s

It is part of Oracle's implementation of Java that the default value of a
character (and hence of each character in a new character array) is the
Unicode sign denoted by '\u0000', which corresponds to the NUL ASCII character
with numeric value 0000 0000.
Since s does not contain the character '\u0000', one cannot simply return x
after having copied only those characters of s not in r: the cells of x that
were never written would still hold '\u0000'.
The returned string must be a slice of x.
Each time we copy a character from s to x that is not in r we must keep track
of which slice of x corresponds to the desired result.
The index of the leftmost atom in the relevant slice of x is always 0, no
matter how many characters are transferred from s.
The count rightmost starts at 0 and is incremented each time a character is
copied from s to x, so it is always one past the index of the most recently
copied atom.
This inspires the following method of copying.

First we set up the flags.

boolean[] f = new boolean[128]; // there are 128 ASCII characters 
for(char c : r) {
 f[c] = true;
}

Now we copy only those characters of s that are not in r using the boolean 
array of flags f.

int rightmost = 0;
for(char c : s) {
 if(!f[c]) {
  x[rightmost] = c;
  rightmost++;
 }
}

Finally, we need only return that slice of x which contains the characters in s 
not in r.

return new String(x, 0, rightmost);

Now that we have a working solution we can make it "efficient".
Notice that if s is very long then it is wasteful to create a whole new array 
of characters the same length as s, as we did with x in the method just 
presented.
Rather, we can use s as follows:

int rightmost = 0;
for(char c : s) {
 if(!f[c]) {
  s[rightmost] = c;
  rightmost++;
 }
}

The principal reason this method works is that the index of the character
being examined in s is never before the index rightmost, so a copy never
overwrites a character that has yet to be examined.

Here is a complete example implementation of a solution together with a main 
method to run it all.

public class temp{

 public static void main(String[] args){
  String x = "Hello, this is a sentence.";
  String y = "aeiou";
  System.out.println(x);
  System.out.println(y);
  String z = removeChars(x,y);
  System.out.println(z);
 }

 public static String removeChars(String str, String rmv){
  char[] s = str.toCharArray();
  char[] e = rmv.toCharArray();

  boolean[] f = new boolean[128]; // assumes ASCII
  for(char c:e) f[c]=true;

  int R = 0;
  for(char c:s)
   if(!f[c])
    s[R++]=c;

  return new String(s,0,R);
 }
}



20151108T1249 Notes from now.html
This is a record of what I had originally put into my list of things that I am 
working on at the moment.
While I'm still interested in these things I am not focusing on them at the
moment, and would rather reformulate them than put them back into the
"next.html" page.
My structure of new, now, next is something which has worked well, but which 
could do with more frequent updates and a finer level of annotation and 
tracking.
The new page links directly to my github commits to jmeuser.github.io which 
gives the most accurate record of what is new with me.
Though I do maintain a private notebook, I put much of that private material
in this public notebook, or vice versa (mostly for record-keeping purposes).
At some future point in time I will give a more detailed description of my 
organization system based on the "new, now, next" method I've been developing.

IT and "The Cloud"
Freedom
 What is "freedom in the cloud"?
 What should it be?
 Why is it unavoidably necessary?
Standards
 what standards are there?
 what do we want from a "cloud standard"?
 what do we do with "the cloud" and what ideal purpose might a standard serve?
Public Cloud Vendors
 who are they?
 what do they give?
 how do they give it? 
DevOps
 define
 analyze its components





20151107T1402 Review a Language by Solving Problems
It might seem obvious to most people, but it's surprising that few are willing 
to review a programming language by simply solving problems.
Rather, they attempt to cheat themselves by leafing through a few pages of a 
fast "primer" or pocket reference and think that will get them ready to go for 
a programming interview.
Without giving time and attention to implementing a working/running solution to 
a good problem you miss the chance to get your head back into the language 
you're interested in.

When you're solving a difficult problem in math, something that might be worthy 
of publication for example, there comes a point when you go from being 
"outside" the problem to being "inside" the problem.
From the outside of a problem you can only see its surface features, you only 
get a glimpse at what might hide underneath that surface.
The most challenging part of solving a difficult problem is properly surveying
its surface for an appropriate access point.
You want to find a part or point where you can dig deep without wearing
yourself out, or maybe even crack open the whole problem in one well placed
strike.

Once you start to get inside a problem you have a new set of problems: 
everything you see is only from inside the problem.
This can make it hard to see how something else might relate to what you're 
working on at the moment.
It is for this reason that most people say "have at least two problems you're 
actively working on" because when you switch from one to the other you help to 
remind yourself that there's more to the world than just the single problem 
you're actively trying to solve.
On the other hand, there is something to be said for being able to forget the 
world for a while and just wrap yourself in a problem until you know it as you 
might a good friend.

In the end, there is no substitute for concrete experience.
Experience with problem solving is mostly a practice in patience and failure.
To solve a problem requires you to stretch past your limits: if it doesn't then 
you already knew the solution or you're delusional.
When people say that it's not the destination but rather the journey that 
matters, they are usually referencing the growth they experience as a result of 
struggling with their own limitations.
Runners call it "the wall", but for mathematicians every step towards a 
solution to a problem is a wall, the whole process is a huge wall, otherwise it 
wouldn't be a problem in the first place.
If you or someone else had a solution then you would walk that path, no 
obstacles, it's solved: the struggle is not real.

There is a difference between studying a path that someone has forged for you, 
and struggling to make your own path.
It helps to have someone give you a well worn path disguised as a thicket.
If you struggle on your own and can't seem to make progress then the challenger 
can reveal where you might find your next success, otherwise it can be almost 
impossible at times to know what is a success and what is a failure.

In the real world, there is no clear distinction between success and failure.
One person's success is another's failure and vice versa.
This is not a trivial statement, it is genuine wisdom.
You must find it within yourself to ignore the negative connotations attached
to failure: once you do, it will no longer matter whether something "is a
success" or "a failure"; rather, all that will matter is what you do with your
experience.





20151104T1445 Changing the format of hr.html
I desperately need to update what is already at hr.html in order to put it into 
a form that is more consistent across the different subdivisions of problems 
that I have solved there.
For now I've decided to include the nested headings that are given in the 
navigation bar to the Hacker Rank problems that I solve.




20151104T1442 Installed Java 8 JDK

I downloaded the Java SE Development Kit 8 from this website:
http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

Specifically I selected the Mac OS X x64 option that had a file size of 
227.14MB and was under the boxed heading “Java SE Development Kit 8u65”.

I had originally downloaded just the JRE 8 update 65, thinking that it would
give me access to the facilities needed to compile and run a Java program, but
it did not. The JRE is only that part of Java that is explicitly needed in
order to RUN a Java program that has been compiled to Java bytecode.



20151102T1519 Old Notes from math.html

Math is a story.
Its plot gives power.
Its morals give wisdom.
This is my telling of the math story.

I've added little, perhaps nothing, to the math story.
I only retell the stories I've heard.

The storytellers I've learned from are

Bertrand Russell
R. L. Goodstein
Saunders MacLane
Stephen Cole Kleene
Donald E. Knuth
Nicolas Bourbaki
Tian-Xiao He
David R. Larson
Melvyn Jeter
Zahia Drici
Joshua Brown-Kramer
Edmund Landau
R. L. Moore
Paul R. Halmos
Kenneth E. Iverson
Harold Scott MacDonald Coxeter
David Hilbert
John L. Kelley
Serge Lang
Per Martin-Löf
James Raymond Munkres
George Pólya
Rózsa Péter
Abraham Robinson
Walter Rudin
J. R. Shoenfield
Alfred Tarski
Names are added as they occur to me.
For now, I'm retelling Goodstein's 'Fundamental Concepts of Mathematics'.

Introduction of Numerals

A numeral is an occurrence of a zero or an occurrence of a successor of a 
numeral.
A numeral system is a collection of methods for introducing an occurrence of 
zero and introducing the successor of a numeral of the system.

Our first numeral system uses these rules:
1 write '0'; and
2 write '1+ 0' for '0' in a numeral.
Rule 1 gives the zero numeral and rule 2 gives the successor of a numeral of 
this system.

Here are example applications of these rules:

--- 1
 0


-------- 1
 0
-------- 2
 1+ 0
-------- 2
 1+ 1+ 0


 1+ 1+ 0
------------------ 2
 1+ 1+ 1+ 0
------------------ 2
 1+ 1+ 1+ 1+ 0
------------------ 2
 1+ 1+ 1+ 1+ 1+ 0


Consequently,
0
1+ 0
1+ 1+ 0
1+ 1+ 1+ 0
1+ 1+ 1+ 1+ 0
1+ 1+ 1+ 1+ 1+ 0
are occurrences of numerals of this system.

It is possible to begin by constructing the familiar Hindu-Arabic positional 
numeral system.
The apparent complexity of the rules needed to do so dissuades their immediate 
introduction.
Alternatively, it can be introduced as an abbreviation for numerals of the 
current system.
Rather than give the entire system of abbreviation in full it will be given as 
needed.

---
 0

 0
-----
 S 0


Introduction and Elimination of Pronumerals

---   ---   ---
 x     y     z

 x     y     z
---   ---   ---
 0     0     0

 x       y       z
-----   -----   -----
 S x     S y     S z


Repeat

 f^0 y
-------
     y

 f^(S x) y
-----------
 f f^x y


Introduction and Elimination of Addition

Addition is repeated succession.


 x | y
---+---
 x+ y

 0+ y     (1+ x)+ y
------   -----------
    y      1+ x+ y

or

 x+ y
-------
 S^x y


Introduction and Elimination of Multiplication

Multiplication is repeated addition.

 x | y
---+---
 x* y


 0* y    (S x)* y
------  ----------
    0    y+ x* y

or

  x* y
--------
 y+^x 0


Exponentiation

Exponentiation is repeated multiplication.

exp: {x*^y 1}


Introduction and Elimination of Predecessor

 0
-----
 P 0

 S x
---------
 P S S x


 P 0
-----
   0

 P S x
-------
     x


Monus

Monus is repeated predecession.

 x- y
-------
 P^y x

S:   successor
P:   predecessor

sum: {$[x=0;y;S (P x) sum y]}     Recursive
sum: {S^x y}                      Iterative

prod:{$[x=0;0;y sum (P x) prod y]}   Recursive
prod:{(x sum)^y 0}                   Iterative

exp: {$[y=0;1;x prod x exp P y]}   Recursive
exp: {(x prod)^y 1}                Iterative

dif: {$[y=0;x;P x dif P y]}   Recursive
dif: {P^y x}                  Iterative

alt: {$[x=0;0;1 dif alt P x]}   Recursive
alt: {(1 dif)^x 0}              Iterative

hf:  {$[x=0;0;hf+alt x]}                    Recursive
hf:  {1# ({[u;v] P u; v sum alt u}^x)[x;0]} Iterative

rt:  {$[x=0;(rt x)+ 1- (S x)- (exp 2) S rt x]}         Recursive
rt:  {$[1- (S x)- (exp 2) S rt P x; rt P x; S rt P x]} Recursive
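
For what it's worth, here is one possible reading of the iterative definitions
above as a sketch in Go (my own illustration; the names repeat, succ, and pred
are mine): f^x y is just "apply f to y, x times", and sum, prod, exp, and dif
all fall out of a single repeat combinator.

package main

import "fmt"

// repeat applies f to y, x times: the f^x y of the notation above.
func repeat(f func(uint) uint, x, y uint) uint {
    for ; x > 0; x-- {
        y = f(y)
    }
    return y
}

func succ(y uint) uint { return y + 1 }

// pred is the monus predecessor: P 0 = 0.
func pred(y uint) uint {
    if y == 0 {
        return 0
    }
    return y - 1
}

func main() {
    sum := func(x, y uint) uint { return repeat(succ, x, y) } // S^x y
    prod := func(x, y uint) uint { // (x sum)^y 0
        return repeat(func(a uint) uint { return sum(x, a) }, y, 0)
    }
    exp := func(x, y uint) uint { // (x prod)^y 1
        return repeat(func(a uint) uint { return prod(x, a) }, y, 1)
    }
    dif := func(x, y uint) uint { return repeat(pred, y, x) } // P^y x
    fmt.Println(sum(2, 3), prod(2, 3), exp(2, 3), dif(5, 2)) // 5 6 8 3
}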


Copying

All of math and science are born from and reducible to imitation and repetition.
Imitation is the behavior of performing similar behaviors.
It is possible to condition the performance of similar behaviors without 
observing imitation.
Thus, the occurrence of similar events of human behavior is not necessarily 
imitation.

Prior to similar events of human behavior were occurrences of similar events.
The extent to which we can clearly and exactly identify and describe the 
occurrence of similar events is often how we measure progress in math and 
science.

Copying is a type of imitation.
Copying is used to record events.
Some consequents of copying are more similar than others.

Abstraction, as an activity, is born from errors in imitation.

Similarity may also be seen as errors between occurrences of events.
Without errors between events, similarity would reduce to sameness making the 
occurrence of any event indistinguishable from the occurrence of another.
That different events occur seems to be basic.


Counting Small Collections

Counting is a foundation and origin of math.
To count is to name a tally.
A tally is a copy of a collection in marks.

  apple        orange       pear    collection
---------   -----------   -------   copy
.           | |       |   *   * *   tally 
  . . . .       |   |       *
                  |

A tally, being a collection of marks, can be copied.

.           | |       |   *   * *
  . . . .       |   |       *
                  |               collection
---------   -----------   ------- copy
|   |   |     * * * *     .       tally
  |   |     *         *     .
                              .
                                .

A tally tallies itself.
Systematic methods of tallying turn into counting.
Counting, as we know it, is an efficient way to record and communicate acts of 
imitation or, specifically, copying.


Counting Large Collections

Use an abacus.


Numerals

Numerals are introduced to abbreviate counting.
Some numeral systems are designed so that more complex mathematical activities 
can be performed quickly and consistently.

Modern decimal notation using Hindu-Arabic numerals has replaced almost all 
previously created numeral systems.
This is due primarily to its use in performing the fundamental algorithms of 
arithmetic:
addition;
multiplication;
subtraction; and
division.

The conceptual origins of the Hindu-Arabic numeral system are in the use of
the abacus to count large collections.

The act of copying an object from a collection into acts of moving beads is
encoded in a Hindu-Arabic numeral.

The numeral 123 abbreviates the events between the following antecedent and 
consequent of counting with an abacus

 ===============
   o    o    o  
   o    o    o
   o    o    o 
   o    o    o 
   o    o    o 
   o    o    o 
   o    o    o 
   o    o    o 
   o    o    o     
   |    |    |
 ===============  antecedent
----------------- intermedient
 ===============  consequent
   o    o    o  
   o    o    o
   o    o    o 
   o    o    o 
   o    o    o 
   o    o    o 
   o    o    | 
   o    |    o 
   |    o    o     
   o    o    o
 ===============

Whatever meaning we give to numerals and their use to count or record counts 
comes from the inference or induction we make from our records of events to the 
events that may have brought about those records.

By abstracting from counting with an abacus we can examine the notion of
numeral independent of its occurrences or interpretations.
We do this to identify the unavoidable constraints on any numeral system we 
might decide to design in the future.

In anticipation of addition the notion of numeral is encoded with a verb "next 
numeral" denoted by '1+' and an initial noun zero denoted by '0'.

Here 1+0 might be interpreted as the following abacus events:

 ===============
   o    o    o  
   o    o    o
   o    o    o 
   o    o    o 
   o    o    o 
   o    o    o 
   o    o    o 
   o    o    o 
   o    o    o     
   |    |    |
 ===============  0   antecedent
----------------- 1+  act
 ===============  1+0 consequent
   o    o    o  
   o    o    o
   o    o    o 
   o    o    o 
   o    o    o 
   o    o    o 
   o    o    o 
   o    o    o 
   o    o    |     
   |    |    o
 ===============

Another example, now with 1+1+1+0

 ===============
   o    o    o  
   o    o    o
   o    o    o 
   o    o    o 
   o    o    o 
   o    o    o 
   o    o    o 
   o    o    o 
   o    o    o     
   |    |    |
 ===============  0   antecedent
----------------- 1+  act
 ===============  1+0 consequent
   o    o    o  
   o    o    o
   o    o    o 
   o    o    o 
   o    o    o 
   o    o    o 
   o    o    o 
   o    o    o 
   o    o    |     
   |    |    o
 ===============  1+0   antecedent
----------------- 1+    act
 ===============  1+1+0 consequent
   o    o    o  
   o    o    o
   o    o    o 
   o    o    o 
   o    o    o 
   o    o    o 
   o    o    o 
   o    o    | 
   o    o    o     
   |    |    o
 ===============  1+1+0   antecedent
----------------- 1+      act
 ===============  1+1+1+0 consequent
   o    o    o  
   o    o    o
   o    o    o 
   o    o    o 
   o    o    o 
   o    o    o 
   o    o    | 
   o    o    o 
   o    o    o     
   |    |    o
 ===============

In general, a numeral system is any system sufficiently similar to the
following rules


--- Introduce Zero
 0

 0
--- Introduce Next Numeral
1+0

These rules are purely constructive, that is, they are not conservative, i.e.
they do not have symmetric elimination rules.

An example sequence of events following these rules:

----- Introduce Zero
  0
----- Introduce Next Numeral
1+0
----- Introduce Next Numeral
1+1+0

Notice, these rules are all instances of copying with systematic errors.
The rule "introduce zero" is a command to copy nothing while introducing an 
error that looks like '0'.
The rule "introduce next numeral" is a command to copy an occurrence of '0' as 
an occurrence of '1+0'.
Specifically, in '1+0' the expression might be copied in the following stages

1
1+      this is the stage where the copy-error is introduced
1+1
1+1+
1+1+0

The reason for continuing to examine these excruciatingly
"common/simple/basic" parts of math is that they really are the conceptual
foundation for all modern mathematical behavior.


Elimination of numerals may be interpreted using modular arithmetic.


A Collection of Introduction and Elimination Schema

Iterative Definition Schema

The following pair of familiar expressions are given by Goodstein in RNT on pg. 
18

"
The common feature of all the definitions A to S is apparent; each definition 
takes the form

F(x,0)=a(x)
F(x,Sy)=b(x,F(x,y))

where F(x,y) is the function defined and a(x), b(x,y) are functions previously
defined, or are variables or definite numerals.
" Goodstein RNT pg. 18

I've gone ahead and written them using the notation from his book, the 
classical notation.

Though his expression and discussion are consistent with his methods, they are 
not as transparent in their meaning as the following graphic presented using 
introduction and elimination rules via event schema.
The following are written using N notation.

      a y                 0 f y
    ----- introduce       ----- eliminate
    0 f y                   a y

y b x f y               S.x f y
--------- introduce   --------- eliminate
  S.x f y             y b x f y

The reason these schema are more representative of their use in defining
functions is that they provide a sense of direction and show that the
iterative schema of definition is conservative.
Specifically, one cannot build an expression containing f without having
established an expression from a, or an expression from b and f of a certain
form.
That this definition is "conservative" is given by the symmetry of the
introduction and elimination rules.
The pairing is important, and is the distinguishing feature of Goodstein's
system.
Furthermore, these schema give a guide for any future notation hoping to
capture definition by iteration.

This iterative schema of definition is extracted from the similarity of the 
form of the defining equations for the following elementary arithmetic acts:

"
In standardized notation the definitions A, M, E, T and S take the forms

A
Sum(x,0)=x
Sum(x,Sy)=S(Sum(x,y))

M
Prod(x,0)=0
Prod(x,Sy)=Sum(x,Prod(x,y))

E
Exp(x,0)=1
Exp(x, Sy)=Prod(x,Exp(x,y))

T
Tet(x,0)=1
Tet(x,Sy)=Exp(x,Tet(x,y))

S
Dif(x,0)=x
Dif(x,Sy)=P(Dif(x,y))

where P(x) is x-.1 defined by

P(0)=0
PSx=x.
" Goodstein RNT pg.18

Goodstein then identifies what anyone would notice after looking at the
iterative schema of definition long enough: one wonders why b does not use y
as one of its arguments.

"
Definition scheme I is called iteration; definition by iteration is a 
particular case of definition by RECURSION of which the scheme is

R
F(x,0)=a(x)
F(x,Sy)=b(x,y,F(x,y))

(The difference between iteration and recursion is that in the latter, function 
b depends upon y as well as upon x and F(x,y).)
" Goodstein RNT pg. 19

Using schema of a more modern form and the notational conventions of N the 
schema of recursive definition is depicted as follows:

          a y                     0 f y
        ----- introduce           ----- eliminate
        0 f y                       a y

(x,y) b x f y                   S.x f y
------------- introduce   ------------- eliminate
      S.x f y             (x,y) b x f y

Though, this is not how an experienced user of N notation would write these 
schema.
They would use the power of forks, and their abundance in everyday arithmetic,
to write these schema as follows:

      a y                 0 f y
    ----- introduce       ----- eliminate
    0 f y                   a y

x , b f y               S.x f y
--------- introduce   --------- eliminate
  S.x f y             x , b f y

For those unfamiliar with N's use of forks and hooks, the expression
'x , b f y' is calculated as follows:

    x , b f y
----------------- eliminate fork
(x , y) b (x f y)

which gives the same expression as in the previous non-forked expression.
Why use forks?
Because under these definition schema, and using N's tacit verb concept, we
can write:

  {0 f y} = {a y}
{S.x f y} = , b f

Or if one wanted to make expressions that are, in my opinion, more distinct and 
clear in their meaning and use of constant and projection functions:

 {0} f {y} =   a {y}
S{x} f {y} = , b f

Both expressions are actually pairs of verbs/abbreviations/functions (but not
'real' functions) themselves: they are not what a classical algebra student
would interpret as a statement of equality.
This seems weird at first, and pointless if you are not aware of how Goodstein
uses equality.
His expressions, and his entire formal primitive recursive arithmetic, never
explicitly use functions: they use symbols for what most people would call
functions.
This subtlety seems to have been missed by anyone who has read his work.


Contents of Goodstein's Fundamental Concepts of Mathematics
Ch. 1 Numbers for Counting
Definition of Counting
Addition
Positional Notation
Commutative and Associative Properties
Recursive Definition
Mathematical Induction
Inequality
Subtraction
Multiplication
Shortcuts in Multiplication
The Distributive Property
Prime Numbers
The Infinity of Primes
Division
Quotient and Remainder
Exponentiation
Representation in a Scale
The Counterfeit Penny Problem
Tetration
The Arithmetic of Remainders
Rings and Fields
The Fundamental Theorem of Arithmetic
The Equation 1 = (a * x) - b * y
The Measuring Problem and the Explorer Problem
Groups
Isomorphisms
Cyclic Groups
Normal Subgroups
The Normalizer, the Center, and the Factor Group
Semigroups
The Word Problem for Semigroups and for Groups
Congruences
Fermat's Theorem
Tests for Divisibility
Tests for Powers
Pascal's Triangle
Binomial Coefficients
Ordinal Numbers
Transfinite Ordinals
Transfinite Induction

Ch. 2 Numbers for Profit and Loss and Numbers for Sharing
Positive and Negative Integers
The Ring of Integers
Inequalities
Numbers for Sharing
Addition, Multiplication, Division of Fractions
Inequalities
Enumeration of Fractions
Farey Series
Index Laws
The Field of Rational Numbers
Negative Indices
Fractional Indices
The Square Root of 2
The Extension Field x + y * %: 2
Polynomials
The Remainder Theorem
Remainder Fields
Enumeration of Polynomials

Ch. 3 Numbers Unending
Decimal Fractions
Terminating and Recurring Decimals
Addition, Multiplication, Subtraction of Decimals
Irrational Decimals
Positive and Negative Decimals
Convergence
Some Important Limits
Generalized Binomial Theorem
Sequence for e
The Exponential Series
Continuity
Intervals
Limit Point
Closed Sets and Open Sets
Closure
Interior Points
Denumerable Sets
Finite Sets
Infinite Sets
Sequence
Null Sequence
Continuity
Functions
Function of a Function
Inverse Functions
Integration
Increasing Functions
Integration of a Sum
Differentiation
Derivative of an Integral, Sum, Product, Quotient, and Composite Function
The Exponential and Logarithmic Functions
The Logarithmic Series
The Circular Functions
The Evaluation of pi
Pretender Numbers
Dyadic Numbers
Pretender Difference and Convergence
Pretender Limit


Notes on Basic Math by Serge Lang

Contents

Part I Algebra

Chapter 1 Numbers
The integers
Rules for addition
Rules for multiplication
Even and odd integers; divisibility
Rational numbers
Multiplicative inverses

Chapter 2 Linear Equations
Equations in two unknowns
Equations in three unknowns

Chapter 3 Real Numbers
Addition and multiplication
Real numbers: positivity
Powers and roots
Inequalities

Chapter 4 Quadratic Equations

Interlude On Logic and Mathematical Expressions
On reading books
Logic
Sets and elements
Notation

Part II Intuitive Geometry

Chapter 5 Distance and Angles
Distance
Angles
The Pythagorean theorem

Chapter 6 Isometries
Some standard mappings of the plane
Isometries
Composition of Isometries
Congruences

Chapter 7 Area and Application
Area of a disc of radius r
Circumference of a circle of radius r

Part III Coordinate Geometry

Chapter 8 Coordinates and Geometry
Coordinate systems
Distance between points
Equations of a circle
Rational points on a circle

Chapter 9 Operations on Points
Dilations and reflections
Addition, subtraction, and the parallelogram law

Chapter 10 Segments, Rays, and Lines
Segments
Rays
Lines
Ordinary equation for a line

Chapter 11 Trigonometry
Radian measure
Sine and cosine
The graphs
The tangent
Addition Formulas
Rotations

Chapter 12 Some Analytic Geometry
The straight line again
The parabola
The ellipse
The hyperbola
Rotation of hyperbolas

Part IV Miscellaneous

Chapter 13 Functions
Definition of a function
Polynomial functions
Graphs of functions
Exponential function
Logarithms

Chapter 14 Mappings
Definition
Formalism of mappings
Permutations

Chapter 15 Complex Numbers
The complex plane
Polar form

Chapter 16 Induction and Summations
Induction
Summation
Geometric Series

Chapter 17 Determinants
Matrices
Determinants of order 2
Properties of 2 by 2 determinants
Determinants of order 3
Properties of 3 by 3 determinants
Cramer's Rule

Numbers

The Integers
(Z. *. 0 <) n means n is a positive integer e.g. 1 2 3 4 5 6 7 8 9 10 11
0 = n means n is zero
N. n means n is a natural number i.e. zero or positive integer
natural number line with origin labeled 0
(Z. *. 0 >) n means n is a negative integer e.g. _1 _2 _3 _4 _5 _6 ..
Z. n means n is an integer (zero, positive integer, negative integer)
integer number line as iterated measurement from 0
addition as iterated motion on the number line
(Z. n) implies (n = n + 0) and n = 0 + n
n - ~ as (- n) +   subtraction as adding a negative
(Z. n) implies (0 = n + - n) and 0 = (- n) + n
n and - n are on opposite sides of 0 on the standard number line
read - n as "minus n" or "the additive inverse of n"

Rules For Addition
(n + m) = m + n                   commutative
((n + m) + k)=n + m + k           associative
0 = n + - n                       right inverse
0 = (- n) + n                     left inverse
n = - - n                         involution
(- n + m) = (- n) - m             negation distributes over addition
(*. / 0 < n) implies 0 < + / n    positive additivity
(*. / 0 > n) implies 0 > + / n    negative additivity
(n = m + k) implies m = n - k     left solvable
(n = m + k) implies k = n - m     right solvable
((n + m) = n + k) implies m = k   cancellation rule
(n = n + m) implies m = 0         unique right identity
(n = m + n) implies m = 0         unique left identity

Rules For Multiplication
(n * m) = m * n                   commutative
((n * m) * k) = n * m * k         associative
n = 1 * n                         identity
0 = 0 * n                         annihilator
(n * (m + k)) = (n * m) + n * k   left-distributive
((n + m) * k) = (n * k) + m * k   right-distributive
(- n) = _1 * n                    minus is multiplication by negative one
(- n * m)=(- n) * m               minus permutes over multiplication
(- n * m) = n * - m               minus permutes over multiplication
(n * m) = (- n) * - m
(n ^ k) = * / k # n               exponentiation is iterated multiplication
(n ^ m + k) = (n ^ m) * n ^ k
(* / n ^ m) = n ^ + / m
(n ^ m ^ k) = n ^ m * k
(n ^ * / m) = ^ / n , m
((n + m) ^ 2) = (n ^ 2) + (2 * n * m) + m ^ 2
(*: n + m) = (*: n) + (+: n * m) + *: m
((n - m) ^ 2) = (n ^ 2) - (2 * n * m) + m ^ 2
(*: n - m) = (*: n) - (+: n * m) + *: m
((n + m) * n - m) = (n ^ 2) - m ^ 2
((n + m) * n - m) =(*: n) - *: m
n ((+ * -) = (*: [) - (*: ])) m

Even And Odd Integers; Divisibility
odd integers: 1 3 5 7 9 11 13 ..
even integers: 2 4 6 8 10 12 14 ..
'n is even' means n = 2 * m for some m with Z. m
'n is odd' means n = 1 + 2 * m for some m with Z. m
if E means even and I means odd then
 E = E + E and E = I + I
 I = E + I and I = I + E
 E = E * E and I = I * I
 E = I * E and E = E * I
 E = E ^ 2 and I = I ^ 2
 1 = _1 ^ E and _1 = _1 ^ I
n (-. |) m means "n divides m" if m = n * k for some integer k
n (-. |) n and 1 (-. |) n
"a is congruent to b modulo d" if a - b is divisible by d
if (a - b) | d and (x - y) | d then ((a + x) - b + y) | d 
if (a - b) | d and (x - y) | d then ((a * x) - b * y) | d

Rational Numbers
fractions: mrn with m , n integer numerals and -. n = 0 e.g. 0r1 _2r3 3r4 ...
dividing by zero does not give meaningful information
rational number line
(m % n) = s % t if *. / (-. 0 = n , t) , (m * t) = n * s
m = m % 1
(-. 0 = a , n) implies (m % n) = (a * m) % a * n  cancellation rule
(- m % n) = (- m) % n
(- m % n) = m % - n
(*. / (Q. r) , 0 < r) iff *. / (r = n % m) , (Z. , 0 <) n , m
"d is a common divisor of a and b" if d divides both a and b
the lowest form of a is mrn where 1 is the only common divisor of m and n
every positive rational has a lowest form
if -. n = 1 and the only common divisor of m and n is 1 then mrn = m % n
((a % d) + b % d) = (a + b) % d
((m % n) + a % b) = ((m * b) + a * n) % n * b
(0 = 0 % 1) and 0 = 0 % n
(a = 0 + a) and a = a + 0
negative rational numbers have the form _mrn
_mrn = - mrn and mrn = - _mrn
rational addition is commutative and associative
((m % n) * a % b) = (m * a) % n * b
((m % n) ^ k) = (m ^ k) % n ^ k
(Q. r) <: -. 2 = r ^ 2
a real number that is not rational is called irrational
rational * is associative, commutative, and distributes over +
(Q. r) <: (a = 1 * a) *. 0 = 0 * a
! = (* / 1 + i.) i.e. (! n) = 1 * 2 * 3 * ... * n
! = ] * (! <:) i.e. (! 1 + n) = (1 + n) * ! n
(n ! m) = (! n + m) % (! n) * ! m   binomial coefficients
(n ! m) = ((! + /) % (* / !)) n , m   multinomial coefficients
(n ! m) = m ! n
(n ! m + 1) = (n ! m) + (n - 1) ! m
decimals

Multiplicative Inverses
(*. / (Q. a) , -. a = 0) implies *. / (Q. b) , 1 = a * b
"b is a multiplicative inverse of a" if *. / 1 = a (* ~ , *) b
(b = c) if *. / (-. 0 = a) , 1 = (a * b) , (b * a) , (a * c) , c * a
(-. 0 = a) implies *. / (1 = a * % a) , 1 = (% a) * a
(-. 0 = a =: n % m) implies *. / ((% a) = m % n) , (% a) = (n % m) ^ _1
(1 = a * b) implies b = a ^ _1
(0 = a * b) implies +. / 0 = a , b
((a % b) = c % d) if *. / (-. 0 = b , d) , (a * d) = b * c
(b = c) if *. / (-. 0 = a) , (a * b) = a * c   times cancellation law
(*. / -. 0 = b , c) implies ((a * b) % a * c) = b % c  quotient cancellation law
((a % b) + c % d) = ((a * d) + b * c) % b * d
(((x ^ n) - 1) % x - 1) = (x ^ n - 1) + (x ^ n - 2) + ... + x + 1
if n is odd then (((x ^ n) + 1) % x + 1) = - ` + / x ^ n - 1 + i. n

Linear Equations

Equations In Two Unknowns
assuming c = (a * x) + b * y and u = (v * x) + w * y yields 
 x = ((w * c) - u * b) % (w * a) - v * b
 y = ((v * c) - w * u) % (v * b) - w * a
elimination method: common multiples

Equations In Three Unknowns
iterate elimination method

Real Numbers

Addition And Multiplication
the real number line
addition of real numbers is commutative, associative, a = 0 + a , 0 = a + - a
(0 = a + b) implies b = - a  unique additive inverse
* is commutative, associative, distributes over +, a = 1 * a, 0 = 0 * a
((a + b) ^ 2) = (a ^ 2) + (2 * a * b) + b ^ 2
((a - b) ^ 2) = (a ^ 2) - (2 * a * b) + b ^ 2
((a + b) * a - b) = (a ^ 2) - b ^ 2
every nonzero real number has a unique multiplicative inverse
the E , I system satisfies the addition and multiplication properties

Real Numbers: Positivity
positivity as being on a side of 0 on the number line
a > 0 means "a is positive"
(*. / 0 < a , b) implies *. / 0 < (a * b) , a + b
(*. / 0 < a) implies (*. / 0 < * / , + /) a
~: / (0 = a) , (0 < a) , 0 > - a
a < 0 means -. *. / (0 = a) , (- a) > 0
"a is negative" means a<0
(a < 0) iff 0 < - a
(0 < 1) and 0 > _1
every positive integer is positive
(0 > a * b) if (0 < a) and 0 > b
(0 > a * b) if (0 > a) and 0 < b
(0 < a) implies 0 < 1 % a
(0 > a) implies 0 > 1 % a
assume completeness: (a > 0) implies *. / (0 < %: a) , a = (%: a) ^ 2
"the square root of a" means %: a
an irrational number is a real number that is not rational e.g. %: 2
Assuming *. / a = *: b , x yields
 0 = - / *: b , x
 0 = x (+ * -) b
 +. / 0 = x (+ , -) b
 +. / x = (- , ]) b
((x ^ 2) = y ^ 2) implies (x = y) or x = - y
(| x) = %: *: x  absolute value
(% (%: x + h) + %: x) = ((%: x + h) - %: x) % h  rationalize 
0 < a ^ 2
(%: a % b) = (%: a) % %: b alternatively ((%: % /) = (% / %:)) a , b
(*. / (Q. x , y , z , w) , (N. *. 0 <) n) implies (
*. / (Q. c , d) , (c + (d * %: n)) = (x + y * %: n) * z + w * %: n
(| a - b) = | b - a

Powers And Roots
assume *. / (0 < a) , (N. , 0 < ) n implies a = (n %: a) ^ n for a unique 
n %: a
"the nth-root of a" means n %: a
(a ^ 1 % n) = n %: a
(0 < a , b) implies ((n %: a) * n %: b) = n %: a * b
fractional powers: *. / (Q. x) , 0 < a implies there exists a ^ x such that
((a ^ x) = a ^ n) if x = n
((a ^ x) = n %: a) if x = 1 % n
(a ^ x + y) = (a ^ x) * a ^ y
(a ^ x * y) = (a ^ x) ^ y
((a * b) ^ x) = (a ^ x) * b ^ x
*. / (1 = a ^ 0) , 1 = * / #: 0
(a ^ - x) = 1 % a ^ x
(a ^ m % n) = (a ^ m) ^ 1 % n
(a ^ m % n) = (a ^ 1 % n) ^ m

Inequalities
a < b means 0 < b - a
a < 0 means 0 < - a
a < b means b > a
inequalities on the numberline
a <: b means a < b or a = b
a >: b means a > b or a = b
(*. / (a < b) , b < c) implies a < c
(*. / (a < b) , 0 < c) implies (a * c) < b * c
(*. / (a < b) , c < 0) implies (b * c) < a * c
x is in the open interval a , b if (a < *. b >) x
x is in the closed interval a,b if (a <: *. b >:) x
x is in a clopen interval a,b if +. / ((a < *. b >:) , (a <: *. b 
>)) x
(a <),(a <:) , (a >) , a >:  infinite intervals
intervals and the numberline
(*. / (0 < a) , (a < b) , (0 < c) , c < d) implies (a * c) < b * 
d
(*. / (a < b) , (b < 0) , (c < d) , d < 0) implies (a * c) > b * 
d
(*. / (0 < x) , x < y) implies (1 % y) < 1 % x
(*. / (0 < b) , (0 < d) , (a % b) < c % d) implies (a * d) < b * c
(a < c) implies ((a + c) < b + c) and (a - c) < b - c
(*. / (0 < a) , a < b) implies (a ^ n) < b ^ n
(*. / (0 < a) , a < b) implies (a ^ 1 % n) < b ^ 1 % n
(*. / (0 < b , d) , (a % b) < c % d) implies ((a % b) < (a + c) % b + 
d)
(*. / (0 < b , d) , (a % b) < c % d) implies ((a + c) % b + d) < c % d)
(*. / (0 < b , d , r) , (a % b) < c % d) implies (
 (a % b) < (a + r * c) % b + r * d)
(*. / (0 < b , d , r) , (a % b) < c % d) implies (
 ((a + r * c) % b + r * d) < c % d)
(*. / (0 < b , d , r) , (r < s) , (a % b) < c % d) implies (
((a + r * c) % b + r * d) < (a + s * c) % b + s * d)

Quadratic Equations
((*. / 
 (-. a = 0) , 
 (0 = (a * x ^ 2) + (b * x) + c) , 
 (0 <: (b ^ 2) - 4 * a * c)) 
implies
+. / 
 (x = (- b + %: (b ^ 2) - 4 * a * c) % 2 * a) , 
 (x = (- b - %: (b ^ 2) - 4 * a * c) % 2 * a))
(0 > (b ^ 2) - 4 * a * c) implies -. *. / (R. x) , 0 = (a * x ^ 2) + (b * x) 
+ c

On Logic And Mathematical Expressions

Logic
proof as list of statements each either assumed or derived from a deduction rule
converse: the converse of "if A, then B" is "if B, then A"
"A iff B" means "if A, then B" and "if B, then A"
proof by contradiction: take A false, derive a contradiction, conclude A true
equations are not complete sentences
logical equivalence as A iff B

Sets And Elements
set: a collection of objects
element: an object in a set
subset: s0 is a subset of s1 if every element of s0 is an element of s1
empty set: a set that does not have any elements
set equality: s0 equals s1 if s0 is a subset of s1 and s1 is a subset of s0.

Indices
"let x,y be something" includes the possibility that x=y
"let x,y be distinct somethings" excludes the possibility that x=y
x0 x1 x2 x3 .. xn is a finite sequence

Distance And Angles

Distances
assume p0 d p1 gives the distance between the points p0 , p1
assume that for any points p0,p1,p2
0 <: p0 d p1   nonnegative
(0 = p0 d p1) iff p0 = p1   nondegenerate
(p0 d p1) = p1 d p0   symmetric
(p0 d p1) <: (p0 d p2) + p2 d p1   triangle inequality
note the geometric meaning of the triangle inequality
the length of a side of a triangle is at most the sum of the others
assume that two distinct points lie on one and only one line
 (-. p0 = p1) implies *. / (p0 p1 i p0 , p1),
 (*. / p2 p3 i p0 , p1) implies p2 p3 i = p0 p1 i
define betweenness as equality case of the triangle inequality
 (p0 p1 B p2) iff (p0 d p1) = (p0 d p2) + p1 d p2
define segment as the points between a pair of endpoints
 (p0 p1 W p2) iff p0 p1 B p2  (by definition of B we have p0 p1 i p2)
assume the length of a segment is the distance between its endpoints
 (mW p0 p1) = p0 d p1
assume rulers pick out unique points
 (*./(0<:a),a<:p0 d p1) implies *./(p0 p1 W p2),a=p0 d p2 for some p2
 ((*./(p0 p1 W),(= p0 d))p2,p3) implies p2=p3
define circle as the points equidistant from a common point
 (p0 p1 o p2) if (p0 d p1)=p0 d p2  geometric circle from metric
define (p0 r bdB) as the circle with center p0 and radius r
 (p0 r bdB p1) if r=p0 d p1  metric circle as boundary of a ball
prove two points uniquely define a circle
 (p0 p1 o p2) implies (p0 p1 o = p0 p2)
prove a point and radius uniquely define a circle
 (p0 r bdB p1) implies (p0 r bdB p2) iff p0 p1 o p2
define (p0 r clB p1) as the disc with center p0 and radius r
 (p0 r clB p1) if r>:p0 d p1

Angles
assume distinct points lie on a unique line
 (-.p0=p1) implies *./(p0 p1 i p0,p1),
 (*./p2 p3 i p0,p1) implies (p2 p3 i = p0 p1 i)
assume a pair of nonparallel lines share a unique point
 (-.p0 p1 p2 H p3) implies (p0 p1 i *. p2 p3 i)p4 for some p4
 (*./(-.p0 p1 p2 H p3),(p0 p1 i *. p2 p3 i)p4,p5) implies p4=p5
assume a point belongs to a unique parallel to a line
 p0 p1 p2 H p2
 (*./(p0 p1 p2 H p3),p2 p3 i p4) implies p0 p1 p2 H p4
 (*./p0 p1 p2 H p3,p4) implies (p2 p3 i = p2 p4 i)
assume "parallel to" is an equivalence relation
 p0 p1 p0 H p1
 (p0 p1 p2 H p3) implies p2 p3 p0 H p1
 (*./(p0 p1 p2 H p3),p0 p1 p4 H p5) implies p2 p3 p4 H p5
assume a point belongs to a unique perpendicular to a line
 (*./(p0 p1 p2 L p3),p2 p3 i p4) implies p0 p1 p2 L p4
 (*./p0 p1 p2 L p3,p4) implies (p2 p3 i = p2 p4 i)
assume a parallel to a perpendicular is perpendicular
 (*./(p0 p1 p2 L p3),p2 p3 p4 H p5) implies p0 p1 p4 L p5
assume a perpendicular to a perpendicular is parallel
 (*./(p0 p1 p2 L p3),p2 p3 p4 L p5) implies p0 p1 p4 H p5
define a halfline as points on the same side of a line relative to a vertex
 (p0 p1 R p2) if (p2 B p0 p1)+.p1 B p0 p2
assume a halfline is determined by its vertex and any other point on it
 ((p0 p1 R p2)*.-.p0=p2) implies p0 p1 R = p0 p2 R
define (p0 p1 R) as the halfline with vertex p0 to which p1 is incident
assume a pair of distinct points determine two distinct rays
 (-.p0=p1)<:p0 p1 R (-.=) p1 p0 R
assume a point on a line divides it into two distinct halflines
 (p0 p1 i p2)<: (p0 p1 R p2)+.(p0 p1 i p3) implies (p0 p1 R p3)+.p0 p2 R p3
assume two distinct halflines sharing a vertex separate the plane into two parts
define angle as one of the parts of the plane separated by such halflines
assume two points on a circle divide it into two distinct arcs
note Lang uses counterclockwise oriented angles rather than neutral angles
assume p0 p1 p2 c is the counterclockwise arc of (p1 p0 o) from p0 to (p1 p2 R)
define (p0 p1 p2 V) as the angle from p1 p0 R to p1 p2 R containing p0 p1 p2 c
define the vertex of (p0 p1 p2 V) as p1
define (p0 p1 p2 V) is a zero angle as (p1 p0 R = p1 p2 R)
define (p0 p1 p2 V) is a full angle as (p2 p1 p0 V) is a zero angle
note special notation to distinguish a full angle from a zero angle
define (p0 p1 p2 V) is a straight angle as (p0 p1 i p2)
prove if (p0 p1 p2 V) is a straight angle then so is (p2 p1 p0 V)
define (p0 p1 p2 r clBV) as the sector of (p1 r clB) determined by (p0 p1 p2 V)
 (p0 p1 p2 r clBV p3) if (p1 r clB p3)*.(p0 p1 p2 V p3)
define mclB p0 r as the measure of the area of (p0 r clB)
define mclBV p0 p1 p2 r as the measure of the area of (p0 p1 p2 r clBV)
define (mV p0 p1 p2) using the ratio (mclBV p0 p1 p2 r) to mclB p1 r
 (mV p0 p1 p2)=x deg if *./(0<:x),(x<:360),((mclBV p0 p1 p2 r)%mclB p1 r)=x%360
define "x deg" as "x degrees"
prove the measure of a full angle is 360 deg
 (p0 p1 R p2) implies (360 deg)= mV p2 p1 p0
prove the measure of a zero angle is 0 deg
prove the measure of a straight angle is 180 deg
define a right angle as one whose measure is half a straight angle i.e. 90 deg
 (p0 p1 p2 V) is right iff 90=mV p0 p1 p2
assume the area of a disc of radius r is pi*r^2 where pi is near 3.14159
prove that the measure of an angle is independent of r

Pythagorean Theorem
define p W p0 as +. / 2 (p0 W ~) \ p
define noncolinear points p0,p1,p2 as -. p0 p1 i p2
define triangle as segments between three points
 (p0 p1 p2 A p3) if p0 p1 p2 p0 W p3
define the triangle with vertices p0 , p1 , p2 as (p0 p1 p2 A)
define the sides of (p0 p1 p2 A) as (p0 p1 W), (p1 p2 W), and (p2 p0 W)
define triangular region as the points bounded by and having a triangle
define area of a triangle as area of a triangular region
define mA p0 p1 p2 as the measure of the area of (p0 p1 p2 A)
note triangular regions are also called simplexes
note pairs of sides of a triangle determine angles
define a right triangle as one having a right angle
 (p0 p1 p1 p2 Z p3) if *./ (p0 p1 p2 A p3) , 90 = mV p1 p2 p0
define the legs of a right triangle as the sides of its right angle
define the hypotenuse of a right triangle as the non-leg side
assume right triangles with corresponding legs of equal length are congruent
 (*./(p0 p1 p2 Z),(p3 p4 p5 Z),((p1 d p2)=p4 d p5),(p2 d p0)=p5 d p3) implies
 *./((mV p0 p1 p2)=mV p3 p4 p5),((mV p1 p0 p2)=mV p4 p3 p5),
 ((p0 d p1)=p3 d p4),(mA p0 p1 p2)=mA p3 p4 p5
assume parallels perpendicular to parallels cut corresponding segments equally
 (*./(p0 p1 p2 H p3),(p0 p1 p0 L p2),p0 p1 p1 L p3) implies 
 *./((p0 d p1)=p2 d p3),(p1 d p2)= p3 d p0
define (0=mH p0 p1 p2 p3) if -.(p0 p1 p2 H p3)
define ((p0 d p1)=mH p2 p0 p3 p1) if p2 p0 p3 H p1
prove the distance between parallel lines is unique
(*./(p0 p1 p2 H p3, p4)(p2 p3 p3 L p5)(p0 p1 i p5,p6)p2 p4 p4 L p6)<:(p3 d 
p5)=p4 d p6
define rectangle as four sides: opposites parallel and adjacents perpendicular
 (p0 p1 p2 p3 Z p4) if 
 *. / (p0 p1 p2 H p3) , (p1 p2 p3 H p0) ,
 (p0 p1 p1 L p2) , (p1 p2 p2 L p3) , (p2 p3 p3 L p0) , (p3 p0 p0 L p1) ,
 p0 p1 p2 p3 p0 W p4
define (p0 p1 p2 p3 Z) as a rectangle with vertices p0 p1 p2 p3
prove the opposite sides of a rectangle have the same length
note area of a rectangle means area of region bounded and containing a rectangle
define (mZ p0 p1 p2 p3) as area of (p0 p1 p2 p3 Z)
define a square as a rectangle all of whose sides have the same length
prove the area of a square with side length a is a ^ 2
prove that (p0 p0 p1 p2 Z) uniquely determines (p3 p0 p1 p2 Z)
prove the sum of the non-right angles in a right triangle is 90 deg
 (p0 p0 p1 p2 Z) implies 90 = (mV p1 p0 p2) + mV p1 p2 p0
prove the sum of the angles in a right triangle is 180 deg
 (p0 p0 p1 p2 Z) implies 180 = (mV p0 p1 p2) + (mV p1 p2 p0) + mV p2 p0 p1
prove the area of a right triangle with leg lengths a,b is -: a * b
prove the Pythagorean theorem
 (p0 p1 p1 L p2) implies (*: p0 d p2) = + / *: (p0 d p1) , (p1 d p2)
prove a triangle is right iff it satisfies the pythagorean theorem
define the diagonals of (p0 p1 p2 p3 Z) as (p0 p2 W) and p1 p3 W
prove the lengths of the diagonals of a rectangle (and square) are the same
prove the length of the diagonal of a square with side length 1 is %: 2
prove a right triangle with legs of length 3,4 has hypotenuse of length 5
define perpendicular bisector as line perpendicular to segment through midpoint
 (p0 p1 t p2) if ((-: p0 d p1) = p0 d p3) implies +. / (p2 = p3) , p0 p3 p3 L p2
prove (p0 p1 t p2) iff (p0 d p2) = p1 d p2
prove the *: of the diagonal of a rectangular solid is + / *: of its sides
prove the area of a triangle with base length b and height h is -: b * h
prove the hypotenuse of a right triangle is greater than or equal to a leg
prove (*. / (p0 p1 p2 L p3) , (p0 p1 i p3 , p4)) implies (p2 d p3) <: p2 d p4
prove opposite interior angles are the same
prove corresponding angles are the same
prove opposite angles are the same
prove the perpendicular bisectors of the sides of a triangle meet at a point

Isometries

Some Standard Mappings Of The Plane
define p0 is mapped to p1 as (p0 ; p1)
note map is similar in meaning to association,function,verb,arrow
define map of the plane as associating each point of the plane with another
define the value of M0 at p0 or the image of p0 under M0 as (M0 p0)
define M0 maps p0 onto p1 as p1 = M0 p0
define (M0 = M1) as (M0 p0) = M1 p0 for all p0
define the p0 constant map as (p0 Mp)
 p0 = p0 Mp p1
note (p0 [) is the constant map
 p0 = (p0 [ p1)
define the identity map as MI
 p0 = MI p0
note ] is the identity map
 p0 = ] p0
define the reflection map about (p0 p1 i) as (p0 p1 Mt)
 p0 = p1 p2 Mt p3 if (p1 p2 i p4) iff p0 p3 t p4
define the reflection map about p0 as Mm
 (p0 = p1 Mm p2) if p0 p2 m p1
define the dilation about p0 of p1 to p2 as (p0 p1 p2 MH)
 (p0 = p1 p2 p3 MH p4) if
 (*. / (p3 p1 p1 L p5,p6)(p1 p2 o p5)(p1 p3 o p6))<:(p3 p5 p6 H p4)*.p0 p3 i 
p4
define dilation by r0 about p0 as (p0 r0 IH)
 (p0 = p1 r0 IH p2) if (p1 d p0)=r0*p1 d p2
define the counterclockwise rotation about p1 by (p0 p1 p2 V) as (p0 p1 p2 Mo)
 (p0 = p1 p2 p3 Mo p4) if 
 (*./(p2 p4 o p5)(p2 p1 i p5)(p2 p3 i p6)(p2 p6 p6 L p1)p5 p6 Ed p4 p7)<:
 (p2 p4 o p0)*.p2 p7 i p0
note the rotation map defined assumes acute angles
define the counterclockwise rotation about p0 by r0 degrees as (p0 r0 Io)
 (p0 = p1 r0 Io p2) if *./(0<:r0)(r0<:360)r0=mV p2 p1 p0
note 0<:r0 implies (p0 r0 Io) is c.c. and r0<:0 implies (p0 r0 Io) is 
clockwise
prove p0 180 Io = p0 Mm
prove p0 180 Io = p0 _180 Io
prove (p0 0 Io = ])
prove (p0 360 Io = ])
note rotation by 0 or 360 degrees is the identity transformation
define (p0 r0 oV) as (p0 r1 oV) with *./(0<:r1),(r1<360),r0=r1+360*n for 
some n
prove rotation by a negative angle is rotation by a positive angle
define the arrow from p0 to p1 as a0 =: p0 ; p1
 ((p0 S a0) *. p1 T a0) if a0 = p0 ; p1
define p0 is an object of a0 if p0 S a0 or p0 T a0
 (p0 O a0) if (p0 S a0) +. p0 T a0
note, in general, a0;a1 is an arrow with objects a0,a1, source a0 and target a1
 *. / ((a0 , a1) O a0 ; a1) , (a0 S a0 ; a1) , a1 T a0 ; a1
define p0 p1 W as the directed line segment associated with the arrow p0;p1
 (p0 p1 W = p1 p0 W) iff p0 = p1
define translation by (p0 p1 W) as (p0 p1 MW)
 (p0 = p1 p2 MW p3) if
 ((p1 p3 p3 L p0) *. p1 p3 p2 H p0) +.
 *. / ((p1 p2 i p3)(-.p1 p2 i p4)(p1 p4 p2 H p5)p4 p5 p1 H p2)<:p0=p4 p5 MW 
p3
define p0 is a fixed point of M0 if p0 = M0 p0
prove that every point is a fixed point of ]
prove that p0 is the only fixed point of p0 Mp
prove p0 is the only fixed point of p0 Mm
prove p0 is a fixed point of (p1 p2 Mt) iff (p1 p2 i p0)
prove (-. 0 = mV p0 p1 p2) implies p1 is the only fixed point of p0 p1 p2 Mo
prove (-. 0 = r0) implies p0 is the only fixed point of p0 r0 Io
prove that (-. p0 = p1) implies (p0 p1 MW) has no fixed points
prove (-. 1 = r0) implies p0 is the only fixed point of p0 r0 IH
prove every point is a fixed point of p0 1 IH

Isometries
define M0 is an isometry if it preserves distance: (d = d M0)
 (p0 d p1) = (M0 p0) d M0 p1
prove isometries map distinct points to distinct points
 (-. p0 = p1) implies -. (M0 p0) = M0 p1
define y is in the image of A under M0 if y = M0 x for some x in A
assume point and line reflections, rotations, and translations are isometries
prove isometries of points are points
prove isometries of line segments are line segments
prove isometries of lines are lines
prove isometries of circles are circles
prove isometries of discs are discs
prove isometries of circular arcs are circular arcs
prove if -. p0 = p1 are fixed points of an isometry then so are the points on 
p0 p1 i
prove an isometry with three noncolinear fixed points is the identity
prove (p0 1 IH) and (p0 _1 IH) are isometries (the only ones of the family IH)
prove isometries of parallel lines are parallel
prove isometries of perpendiculars are perpendicular
note isometries in 3 space

Composition of isometries
define the composition of M0 with M1, M1 followed by M0, as (M0 M1)
 (p0 = (M0 M1) p1) if (p2 = M1 p1) implies p0 = M0 p2
prove if M0 is an isometry then M0 = (] M0) and M0 = (M0 ])
prove the composition of two (p0 180 Io) is ]
prove the composition of isometries is an isometry
prove the composition of rotations about a point is a rotation about that point
 p0 (r0 + r1) Io = (p0 r1 Io p0 r0 Io)
prove that the composition of translations is a translation
 p0 p2 MW = (p1 p2 MW p0 p1 MW)
prove the composition of dilations about a point is a dilation about that point
 p0 (r0 * r1) IH = (p0 r1 IH p0 r0 IH)
prove the composition of isometries is associative (arrows in general)
define (M0 ^: 2) as (M0 M0)
define (M0 ^: 3) as (M0 M0 M0)
define (M0 ^: 1 + n) as (M0 M0^:n)
define (M0 ^: 0) as ] and (M0^:1) as M0
prove MI = (p0 Mm) ^: 2
prove MI = (p0 Mm) ^: 2 * n
prove (p0 Mm) = (p0 Mm) ^: 1 + 2 * n
prove (M0 ^: n0 + n1) = (M0 ^: n0 M0 ^: n1)
prove if M0 is a reflection through a line then (M0 ^: 2) is MI
note not all isometries commute

Inverse Isometries
define M0 as the inverse of M1 if (] = (M0 M1)) and (] = (M1 M0))
prove the inverse of a map is unique if it has one
define (M0 ^: _1) as the inverse of M0 if it has one
note (y = M0 x) iff (x = (M0 ^: _1)y) or ([ = (M0 ])) = (] = ((M0 ^: _1) [))
prove reflections are their own inverses
prove identity is its own inverse
prove ] = (p0 p1 MW p1 p0 MW) and ] = (p1 p0 MW p0 p1 MW)
prove (p0 p1 MW) and (p1 p0 MW) are inverses of each other
prove ] = (p0 r0 Io p0 -r0 Io) and ] = (p0 -r0 Io p0 r0 Io)
prove (p0 r0 Io) and (p0 -r0 Io) are inverses of each other
 (p0 -r0 Io) = (p0 r0 Io) ^: _1
prove ((M0 M1) ^: _1) = (M1 ^: _1 M0 ^: _1)
define M0 ^: _n0 as (M0 ^: _1) ^: n0
prove (M0 ^: n0 + n1) = (M0 ^: n0) M0 ^: n1
prove if M0,M1 are isometries with *./(M0=M1)p0,p1,p2 then (M0=M1) if M0^:_1 
exists
prove every isometry actually does have an inverse
prove reflections about perpendicular lines commute
prove for isometries M0 , M1 , M2 that (M0 M1) = (M0 M2) implies M1 = M2
note symmetries of the square via isometries
note symmetries of the triangle via isometries
note symmetries of the hexagon via isometries
note do these isometric symmetries characterize these shapes?

Characterization Of Isometries
prove -. p0 = p1 fixed points of isometry M0 implies +. / (MI = M0) , p0 p1 Mt 
= M0
prove an isometry with only one fixed point is +. / Mo , Mo Mt
prove an isometry without a fixed point is +. / MW , (MW Mo) , ((MW Mo) Mm)

Congruences
define p00,p01,..,p0n is congruent to p10,p11,..,p1m if p00,..,p0n=M0 p10,..,p1m
note if one set is the image of another under an isometry then they're congruent
prove circles with the same radius are congruent
prove discs with the same radius are congruent
prove segments with the same length are congruent
prove right triangles whose corresponding legs are congruent are congruent
prove triangles whose corresponding sides are congruent are congruent
prove squares whose sides are congruent are congruent
prove rectangles whose corresponding sides are congruent are congruent
assume the area of a region is equal to the area of its image under an isometry
prove congruence is an equivalence relation
prove any two lines are congruent
prove the sides of a triangle with angle measures 60 deg have equal length
define equilateral triangle if its sides are all the same length
prove SAS characterization of congruence
prove AAS characterization of congruence
prove the inscribed circle of a triangle is centered at the intersection of 
its angle bisectors

Area And Applications

Area Of A Disc Of Radius r
note a unit length determines a unit area
assume area of a square with side length a is a^2
assume area of a rectangle with side lengths a,b is a*b
prove the area of the dilation by r of a square of area a is a*r^2
assume the area of the dilation by r of a region with area a is a*r^2
define o.1 as the length of a circle with radius 1
prove the area of the dilation by r of a disc of radius 1 is o.-:r^2
note approximate regions with squares to find their area
note upper/lower bounds as areas inside and outside of figure
define ellipse as nonuniform scaling of a disc
prove map circle to ellipse algebraically
note scaling and volume in 3-space is similar

Circumference Of A Circle Of Radius r
assume ((o. 1) = mbdB p0 1) and (o. r) = mbdB p0 r
note approximate by dividing disc into n sectors with angles 360%n
note disc area to circle length
prove the length of the dilation by r of a segment of length a is r*a
assume the length of the dilation by r of an arbitrary curve of length a is r*a

Coordinates And Geometry

Coordinate Systems
define an origin as the intersection of perpendicular lines (called axes)
note the classical origin is the intersection of a horizontal and vertical line
note pick unit length, cut axes into segments left/right up/down
note cut plane into squares with unit side lengths
note label each point of intersection with a pair of integers
note intersection of perpendicular lines to axes through a point gives its 
coordinate
define the coordinate of the origin as 0,0
note meaning of the positive/negative components as motions
note the x-coordinate is usually the first, the y-coordinate the second
prove the axes divide the plane into four quadrants
define the positive side of the second axis as counterclockwise from the 
positive side of the first
note plot points
assume/prove every point corresponds to a unique pair of numbers
assume/prove every pair of numbers corresponds to a unique point
note points in 3-space

Distance Between Points
points on the number line are labeled so that algebraic definitions work simply
note the distance between points in the plane is found using the pythagorean 
theorem
prove the distance between points p0 and p1 on a number line is %:(p0-p1)^2
 (*./(p0=a0,b0),p1=a1,b1) implies (p0 d p1)=%:@+/@*:(a1-a0),(b1-b0)
assume distance as d=:%:@+/@*:- satisfies the required geometric properties
define the plane as all pairs of real numbers with distance %:@+/@*:-
prove (0 = p0 d p1) iff p0 = p1
define dilation as * i.e. (r * x , y) = (r * x) , r * y
prove (0 <: r) implies (d r * x , y) = r * d x , y 
prove ((r * [) d r * ]) = r * d
prove distance works in 3-space
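
As a check, the same distance function can be spelled in executable J (the 
name dist is mine; the N spelling above is not J syntax):

   dist =: %:@(+/)@:*:@:-   NB. root of the sum of squared differences
   0 0 dist 3 4
5
   1 1 dist 4 5
5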

Equation Of A Circle
assume (p0 p1 o p2) iff (p0 d p1) = p0 d p2
assume p0 r0 bdB p1 if r0 = p0 d p1
define p0 r0 bdB as the circle centered at p0 with radius r0
prove ((p0=:r0,r1) r2 bdB p1=:r3,r4) iff (*:r2)=+/*:p0-p1
prove the equation of a circle in r3,r4 with center r1,r2 and radius r0 is
 (*: r0) = + / *: (r1 , r2) - r3 , r4
prove (p0 r0 bdB p1) iff (*: r0) = + / *: p0 - p1
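
In classical notation: the point (x,y) lies on the circle with center 
(c_0,c_1) and radius r exactly when r^2 = (x-c_0)^2 + (y-c_1)^2.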

Rational Points On A Circle
prove ((*:c)=+/*:a,b) iff (1=+/*:(a,b)%c) iff 1=+/*:(x=:a%c),(y=:b%c) when -.c=0
note to solve (*:c)=+/*:a,b for integers a,b,c solve 1=+/*:x,y for rationals x,y
define a rational point as one whose components are rational numbers
prove (*./(t=:y%1+x),(1=+/*:x,y),-._1=x) <: *./x=((1- % 1+)*:t),y=(2* 
%(1+*:))t
prove 1=+/*:x,y rational <: *./x=((1- % 1+)*:t),y=((2*)%(1+*:))t for rational t
prove distinct rationals give distinct solutions
 (*./(0<:s),s<t) implies >/((1-)%(1+))*:s,t
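
In classical notation the parametrization above reads (with t = y%(1+x) as 
above)

 x = \frac{1-t^2}{1+t^2} , \qquad y = \frac{2t}{1+t^2} ,

and, for example, t = 1/2 gives the rational point (3/5 , 4/5), which 
corresponds to the integer solution 3^2 + 4^2 = 5^2.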

Operations On Points

Dilations And Reflections
assume (r0*r1,r2)=(r0*r1),r0*r2
prove (p0= p1 r0 IH p2) iff (p0=p1+r0*p2-p1) or (p0=(r0*p2)+(1-r0)*p1)
prove (p0= p1 Mm p2) iff (p0=p1-p2-p1) or (p0=(+:p1)-p2)
prove ((r0*r1)d r0*r2)=(|r0)* r1 d r2
note the n-dimensional case

Addition Subtraction And The Parallelogram Law
assume ((a0,a1)+b0,b1)=(a0+b0),a1+b1
prove commutativity (p0+p1)=p1+p0
prove associativity: (p0+p1+p2)=(p0+p1)+p2
prove 0,0 is an additive identity: (p0=p0+0,0) and p0=(0,0)+p0
prove additive inverses exist: ((0,0)=p0+-p0) and (0,0)=(-p0)+p0
prove the points (0,0);p0;p1;p0+p1 are vertices of a parallelogram
 (0,0),p0,p1,:p0+p1 W is a parallelogram
prove p0=(p0-p1)+p1
prove (0,0);p0;p1;p0-p1 are vertices of a parallelogram
prove (p0=p1 p2 MW p3) iff (p0=p1+(p2-p1)+p3-p1) or p0=p3+p2-p1
define norm p0 as (0,0) d p0
 norm =:(0,0) d
prove (p0 d p1)=norm p0-p1
prove (p0 d p1)=norm p1-p0
prove M0 is an isometry iff (norm p0-p1)=norm (M0 p0)-M0 p1
prove (p0 r0 bdB p1) iff (p1=(0,0) p0 MW p2) for some p2 with r0=norm p2
prove every circle is the translation of a circle about the origin
 (p0 r0 bdB p1) iff (p1=(0,0) p0 MW p2) for some p2 with (0,0) r0 bdB p2
prove associativity: (r0*r1*p0)=(r0*r1)*p0
prove distributivity: (r0*p0+p1)=(r0*p0)+r0*p1
prove identity: p0=1*p0
prove annihilator: (0,0)=0*p0
prove translation is an isometry
 (p0 d p1)=(p2 p3 MW p0) d p2 p3 MW p1
prove a reflection through the origin followed by a translation is a 
point-reflection
 (p0 p1 MW (0,0) Mm)= p2 Mm for some p2
prove a dilation through the origin followed by a translation is a 
point-dilation
 (p0 p1 MW (0,0) r0 IH)= p2 r1 IH for some p2 and r1
prove the reflection of a circle through a point is a circle
for some p4,p5 (*./(p0=p1 Mm p2),p3 p4 o p2) iff (p4 p5 o p0)
prove the dilation of a circle through a point is a circle
prove ((] = M0 p0 p1 MW) and (] = p0 p1 MW M0)) iff (M0 p2)=p0+(p0-p1)+p2-p0
prove the inverse of a translation is a translation
prove ((] = M0 p0 r0 IH) and (] = p0 r0 IH M0)) iff (M0 p1)=p0+(%r0)*p1-p0
prove the inverse of a dilation is a dilation
prove (p0 = p1 p2 MW p0) iff (p0=p0+p2-p1) iff ((0,0)=p2-p1) iff p1=p2
prove translation doesn't have fixed points unless it is the identity
prove the fixed points of a transformation via its coordinate definition
prove (*./(p0=a0,a1),(e0=1,0),e1=0,1) implies p0=(a0*e0)+a1*e1
prove p0,(p0+r*e0),(p0+r*e1),:(p0+(r*e0)+r*e1) W is a rectangle

Segments, Rays, And Lines

Segments
prove (p0 p1 W p2) iff *./(p2=p0+(p1-p0)*t),(0<:t),t<:1
prove the point halfway between p0 and p0+p1 is p0+-:p1
prove every segment is a translation of a segment from the origin
prove every segment is a translation of a dilation of a unit segment from the 
origin
prove (p0 p1 W p2) iff *./(p2=((1-t)*p0)+t*p1),(0<:t),t<:1
assume (p0 p1 W) is a directed segment ordered by ((1-t)*p0)+t*p1 with 0<:t 
and t<:1
note p0 p1 W is also called a located vector
define the source of p0 p1 W as p0
define the target of p0 p1 W as p1
note p0 p1 W is also said to be located at p0
prove (p0 p1 MW = p1 p0 MW) iff p0=p1
note a point can be represented as an arrow whose source and target are equal

Rays
define the ray with vertex p0 in the direction of (0,0) p1 W as p0 (p0 + p1) R
prove p0 p1 R p2 iff *. / (p2 = p0 + t * p1 - p0) , (R. *. 0 <:) t for some t
prove p0 p1 R = p0 (p1 - p0) R
prove (R. *. 0 <)t implies p0 p1 R = p0 (t * p1) R
define p0 p1 R has the same direction as p2 p3 R if 
 *. / ((R. *. 0 <:) t) , (p1 - p0) = t * p3 - p2 
note this induces a sensed parallel axiom
note multidimensional forms

Lines
define p0 p1 W is parallel to p2 p3 W if *. / (R. t) , (p1 - p0) = t * p3 - p2 
for some t
prove parallelism in this way is an equivalence relation
define p0 parallel to p1 if *. / (-. 0 = p0 , p1) , (R. t) , p0 = t * p1 for 
some t
prove a located vector belongs to a unique line
 p0 p1 W p2 implies p0 p1 i p2
prove (-.p0=0,0) implies ((0,0),:p0 i p1) iff p1=t*p0 for some t
note the line passing through p0 parallel to (0,0) p1 W is all points p0+t*p1 
for some t
prove p0 p1 i p2 iff p2=p0+t*p1-p0 for some t
note p0+t*p1 is called a parametric representation of the line p0 (p0+p1) i
note in N the parametric representation is actually p0 + p1 *
note t is called a parameter in p0+t*p1
note the following argument in N
 p0 =: a0 , a1   p0 is the ordered pair a0,a1
 p1 =: b0 , b1   p1 is the ordered pair b0,b1
 p =: p0 + p1 *   parametric description of the line through p0 parallel to p1
 x =: 0 { p   zeroth coordinate of p
 y =: 1 { p   first coordinate of p
 p = (x , y)
 x = a0 + b0 *
 y = a1 + b1 *
 xaxis =: 0 , ~
 p = xaxis x  suppose p is equal to a point on the xaxis
 (x , y) = 0 ,~ x   since p = (x , y) and (x , 0) = xaxis x
 (x = x) *. 0 = y   pairs are equal iff their components are
 x = x   this is always true, so we don't get any new information
 0 = y   thus (p=xaxis x) iff (0=y)
 (0 = y) t   does there exist t such that 1=((0=y)t) ?
 (0 = a1 + b1 *) t
 (0 t) = (a1 + b1 *) t
 0 = a1 + b1 * t
 t =: b1 % ~ s
 0 = a1 + b1 * b1 % ~ s
 0 = (a1 +) ] s   by algebra 1=]*(%]) or (-.0=[)<: ]=[ * ] % [
 0 = a1 + s
 s =: (- a1) + u
 0 = a1 + (- a1) + u
 0 = ] u
 0 = u
 t = b1 % ~ (- a1) + 0
 t = b1 % ~ (- a1)
 t = (- a1) % b1
 t = - a1 % b1
   p - a1 % b1   yields a point on the x-axis, it is unique (by other arguments)
note mW O p0 can be used to represent the magnitude of a velocity (speed)
note when do two parametrically described lines intersect?
prove when a line crosses a circle
for what x and y does (p=(x,y))*.(*:r)=(+/(*:x,y))
prove if *./-.O=A,B  then A=:a0,a1 is parallel to B=:b0,b1 iff 0=(a0*b1)-a1*b0
prove if two lines are not parallel then they have exactly one point in common
prove if P=:p,q and (*:r)>:+/*:P then P+A* intersects (*:r)=(+/(*:(0 1{))) 
twice
prove if A=:a0,a1 and B=:b0,b1 then (x,y)=(A +)(B *) iff x=a0 + b0 * and y=a1 + 
b1 *

Ordinary Equation For A Line
prove if (x , y) = ((a0 , a1) +) ((b0 , b1) *) then
 x = a0 + b0 *
 y = a1 + b1 *
 ]
 (b % ~) (b *)
 ((b % ~) ]) (b *)
 ((b % ~) (a - ~ a +)) (b *)
 (b % ~) ((a - ~ a +) (b *))
 (b % ~) (a - ~ ((a +) (b *)))
 (b % ~) (a - ~) x
 NB. alternatively (and going along the classical route)
 (a0 , a1) + (b0 , b1) * t
 (a0 , a1) + (b0 * t) , (b1 * t)
 (x =: a0 + b0 * t) , (y =: a1 + b1 * t)
 t
 t * 1
 t * (b0 % b0)
 (t * b0) % b0
 (b0 * t) % b0
 (0 + b0 * t) % b0
 ((- a0) + a0 + b0 * t) % b0
 ((- a0) + x) % b0
 (x - a0) % b0
 t = (x - a0) % b0
 t = (y - a1) % b1  NB. By a similar argument.
prove the ordinary tacit form has x,y on the right
 (x , y) = (A +) (B *) 
 ]
 (B % ~) (B *)
 (B % ~ A - ~ A + B *)
 (B % ~ A - ~) (x , y)
 ] = (b0 % ~ a0 - ~) x
 ] = (b1 % ~ a1 - ~) y
 ((b0 % ~ a0 - ~) x) = ((b1 % ~ a1 - ~) y)
 y = (a1 + b1 * b0 % ~ a0 - ~) x
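
The same elimination in classical notation: from x = a_0 + b_0 t and 
y = a_1 + b_1 t (with b_0 and b_1 nonzero),

 t = \frac{x-a_0}{b_0} = \frac{y-a_1}{b_1} , \qquad 
 y = a_1 + \frac{b_1}{b_0}(x-a_0).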

Trigonometry

Radian Measure
define x=mV p0 p1 p2 if *./(0<:x),(x<:o.1),(x%o.1)=(mclBV p1 1 p0 
p2)%(mclB p1 1)
prove if x=mV p0 p1 p2 then (mclB p1 1)=o.1r2 implies x=mclBV p1 1 p0 p2
prove (deg x)=((o.1)%180)*(rad x)
note from now on: radians only
prove (x%o.1)=(mbdBV p0 1 p1 p2)%(mbdB p0 1)
if x>:o.2 then "x rad" means "w rad" with *./(0<:w),(w<o.2),(x=w+n*o.2)
if *./(0<z),(x=-z) then (rad x) means "w rad" with 
*./(0<:w),(w<o.2),(z=(n*o.2)-w)

Sine And Cosine
if *. / (O p2 K p3) , (-. p3 = O) , (p3 = (a , b)) then "sine V p3 O (1,0)" is 
b % r =: %: + / *: a , b
"cosine V p3 O (1,0)" is a%r
sine and cosine are independent of the point p3 (prove)
geometrically this means that any two such triangles are similar
if O 1 bdB p3=:a,b then (sine V p3 O (1,0))=b and (cosine V p3 O (1,0))=a
for O 1 bdB p3=:(a,b) define (sine mV p3 O (1,0))=b and (cosine mV p3 O (1,0))=a
the signs of sine and cosine depend on the quadrant the relevant angle occupies
Q1:+,+ Q2:-,+ Q3:-,- Q4:+,-
if (LA p0 p1 p2) then (sin V p1 p0 p2)=(d p1 p2)%(d p0 p1)
if (LA p0 p1 p2) then (cos V p1 p0 p2)=(d p0 p2)%(d p0 p1)
"sin x" is (sine rad x)
"cos x" is (cosine rad x)
from the definition of rad (for an arbitrary angle) (sin x)=sin x+n*o.2
(cos x) = cos x + n * o. 2
using plane geometry and the Pythagorean theorem:
=======================
x      sin x    cos x
-----------------------
o.1r6  1r2      (%:3)%2
o.1r4  %%:2     %%:2
o.1r3  (%:3)%2  1r2
o.1r2  1        0
o.1    0        _1
o.2    0        1
=======================
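
The table can be checked in executable J, where o. multiplies by pi, 1&o. is 
sine, and 2&o. is cosine:

   (1&o. , 2&o.) o. 1r6
0.5 0.866025
   (1&o. , 2&o.) o. 1r4
0.707107 0.707107
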
consider 1,1,%:2 and 1,(%:3),2 right triangles (and their angles)
reflect o.1r6, o.1r3, o.1 over longest leg and compute
if 1=$x then 1=+/*:(sin,cos)x since
 1
 (*: r) % *: r
 ((*: a) + *: b) % *: r
 ((*: a)% *: r) + (*: b) % *: r
 (*: a % r) + *: b % r
 + / *: ((a % r) , b % r)
 + / *: (sin x) , cos x
 + / *: (sin , cos) x
(cos x) = sin x + o. 1r2 and (sin x) = cos x - o. 1r2
(sin - x) = - sin x and (cos x) = cos - x
determine a distance using small angle measurements and a known length
polar coordinates
 r = %: + / *: x , y
 V =: mV (x , y) O (1 , 0)
 (x % r) = cos V
 (y % r) = sin V

The Graphs
plot ] , sin

The Tangent
tan =: sin % cos
tan only gives relevant information when -.0=cos
if *. / (O p2 K p3) , (-. p3 = O) , (p3 = a , b) then (b % a) = tan mV p3 O p2
the tangent of the angle made by a line crossing the x-axis is the line's slope
 plot ],tan
we only plot tables of values
cot=: % tan 
sec=: % cos 
cosec =: % sin
1 = - / *: (sec , tan) x
1 = - / *: (cosec , cot) x

Addition Formulas
(sin x + y) = ((sin x) * cos y) + (cos x) * sin y
(cos x + y) = ((cos x) * cos y) - (sin x) * sin y
(sin x - y) = ((sin x) * cos y) - (cos x) * sin y
(cos x - y) = ((cos x) * cos y) + (sin x) * sin y
(sin +: x) = +: * / (sin , cos) x
(cos +: x) = - / *: (cos , sin) x
(*: cos x) = (1 + cos +: x) % 2 or (+: *: cos x) = 1 + cos +: x
(*: sin x) = (1 - cos +: x) % 2 or (+: *: sin x) = 1 - cos +: x
(* / sin (m , n) * x) = -: - / cos (m (- , +) n) * x
(((sin m *) * (cos n *)) x) = -: + / sin (m (+ , -) n) * x
(* / cos (m , n) * x) = -: + / cos (m (+ , -) n) * x
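
A numeric spot check of the first formula in executable J (the test values 
1.2 and 0.7 are arbitrary):

   x =: 1.2 [ y =: 0.7
   1 o. x + y
0.9463
   ((1 o. x) * 2 o. y) + (2 o. x) * 1 o. y
0.9463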

Rotations
Since (r , V + x) = O x oV r , V then
 x0 = r * cos V
 y0 = r * sin V
 x1 = r * cos V + x
 x1 = r * ((cos V) * cos x) - (sin V) * sin x
 y1 = r * sin V + x
 y1 = r * ((sin V) * cos x) + (cos V) * sin x
 x1 = ((cos V) * x0) - (sin V) * y0
 y1 = ((sin V) * x0) + (cos V) * y0
the rotation matrix for x is 2 2 $ (cos , (- sin) , sin , cos) x
note dilation matrices; compositions of actions as multiplications of matrices
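
A sketch of the rotation matrix in executable J (the names rot and mp are 
mine; mp is the usual matrix product idiom):

   rot =: 3 : '2 2 $ (2 o. y), (- 1 o. y), (1 o. y), 2 o. y'
   mp  =: +/ . *
   (rot o. 1r6) mp 1 0   NB. rotate the point (1,0) by pi%6
0.866025 0.5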

Some Analytic Geometry

The Straight Line Again
the plot of the points at which (c = F) yields 1 is called the graph of F
an arbitrary point on the graph of ]=a* has the form (1 , a) *
a point on the graph of ] = (- ]) is of the form (1 , -1) *
the graph of [ = (b + a *) is a straight line parallel to the graph of [ = a * ]
 y1 =: y - b so y1 = a * x with points of the form (x , a * x); the points of 
the graph of [ = (b + a *) are of the form (] , (b + a *))
the slope of a line that is the graph of [ = (b + a * ]) is a
*. / (y0 = b + a * x0) , y1 = b + a * x1 implies *. / ((y1 - y0) = a * x1 - x0) 
, a = (y1 - y0) % x1 - x0
(a = (y - y0) % x - x0) iff ((y - y0) % x - x0) = (y1 - y0) % x1 - x0
0 = c + (a * x) + b * y  equation of a line

The Parabola
(y - b) = c * (x - a) ^ 2 is called a parabola

The Ellipse
((a , b) *) is a nonuniform dilation (scaling)
1 = + / *: (u % a) , (v % b) is an ellipse

The Hyperbola
c = x * y is a hyperbola

Rotation Of Hyperbolas
c = - / *: y , x





20151029T1450 Notes on Lang's Basic Math and N

In a previous version of this website I included my notes on Lang's Basic Math 
in math.html.
My purpose in creating those notes was to use what is now an old version of N 
in order to make all of the mathematical statements which are characteristic of 
basic math.
After doing so I was in a better position to judge the utility of that version 
of N in replacing classical notation.
There is not just a question of utility though, as I realized while producing 
those notes.
There is a genuine aesthetic sense that starts to build up once you start using 
N to organize large swaths of basic math.
If I am going to put in the effort to design a notational language, then I want 
it to have a feeling that enhances its mechanics: a sort of independent sense 
of "self-respect".

At the time of taking those notes I was not as deeply familiar with Goodstein's 
work, and had not confronted the problem of using non-primitive recursive real 
numbers.
The ability to avoid the classical constructions with real numbers using 
Goodstein's Equation Calculus has been a huge influence on the design and use 
of N.
In the near future I will hopefully return to Lang's Basic Math in an attempt 
to fit what is relevant into the new notation and rejudge its fitness for 
covering those classical topics.

One thing which I am reminded of while reviewing my notes on Lang is that there 
is a book by Goodstein on Projective Geometry which I cannot get my hands on, 
and which I cannot find for sale on any internet site.
There is also another book by Goodstein on the foundations of mathematics which 
I can't find.
I hope to soon have the resources to find these texts and learn what I can from 
their insightful author.





20151028T1531 Compound Nouns in N and J

Arrays are a multidimensional generalization of linear lists.
One way of describing an array is as a collection of orthogonal lists.
One list is orthogonal to another if they share exactly one item in common.
A visual example of a pair of orthogonal lists is as follows:

0
1
2 3 4
3

Here the pair of orthogonal lists are (0,1,2,3) and (2,3,4) and the item which 
they share in common is 2.
It is important to note that though both lists contain the numeral 3 they do 
not share the same item: the item holding the numeral 2 is the only item shared 
by both lists depicted in the above image.
This corresponds to a common situation in which the same node of memory in a 
computer is shared by a pair of linked lists.
It is common for people to suppose that such nodes are easily shared by linked 
lists because two distinct lists can simply point to the same node, but, in 
general, the situation is more complex than a novice programmer might believe.

What is meant by that last comment is that a collection of orthogonal lists is 
more than just a collection of lists: it is also a binary operation on the 
lists which returns the item that they have in common (and it must return 
exactly one item, except in the case where the argument to the binary operation 
is any of its diagonal elements, in which case it seems adequate to define some 
item which indicates that the argument lists are not orthogonal).

The generalization of the orthogonal lists as defined here is a set of lists 
with a binary operation on that set which gives a linear list of items which 
both lists share.


 


20151026T1437 Further Reason for unary % to be integer square root

In a list of POW functions given at 
https://en.wikipedia.org/wiki/Proof-of-work_system we see that the integer 
square root of a number modulo a large prime is at the top of the list (this is 
because it is a primitive problem that can be made relatively difficult based 
on the choice of "large prime"):

"
List of proof-of-work functions[edit]
Here is a list of known proof-of-work functions:

Integer square root modulo a large prime[1]
" https://en.wikipedia.org/wiki/Proof-of-work_system 20151026T1438





20151026T1241 Notes from RNT.html

These are some notes that I had put into RNT.html, but which I no longer wish 
to be there.
I'm going to turn RNT into more of a collection for relevant information and 
not so much for my notes on RNT.
Notes on RNT will probably be better put in here and then later processed into 
something more cohesive and harmonious with RNT:

Goodstein is missing a rule which permits the introduction of the letter x (or 
other rules for the other letters for that matter).
Such a rule is essential for it is the introduction of x that "starts" the 
whole game off.
(Alternatively it is that rule which introduces 'information' to the system.)

                            antecedent
--- introduce lowercase-ex  action
 x                          consequent

The rules 1 and 2 would be written schematically as

   x
----- introduce the successor of
 1+x

 x
--- eliminate lowercase-ex (Introduce zero)
 0

Interestingly, the rule 'Eliminate lowercase-ex' can also be thought of as 
'Introduce zero'.

Notice also that Goodstein gives a simultaneous definition of numeral and 
numeral variable.

A numeral is the result of a sequence of events having the following form:

[1] Introduce lowercase-ex
[2] Introduce 1+ or stop.
[3] Eliminate lowercase-ex (introduce zero) or go to [2].

For now, there is no way to say what method one must use to "replace x by 1+x".
This 'generalizing abstraction' is encapsulated in the event schema I've used 
above.

Antecedent and consequent events are referenced via intuitive pictures.
In these rules, the event 'an occurrence of a lowercase-ex' is referenced using 
an occurrence of a mark of type lowercase-ex.
The events 'between' the antecedent and consequent events are suppressed using 
either a single line or a double line.

Here a single line is denoted as repeated occurrences of a hyphen or minus-sign 
'-'.
A double line is denoted by repeated occurrences of an equality-sign '='.

The elision of or reference to component events is of principal interest to 
the mathematician in some instances and the philosopher or logician in others.

Those rules which are basic are likely to interest logicians or philosophers.
Those rules which are derived are likely to interest mathematicians or 
logicians.
The products of these rules and their methods are likely to interest scientists 
or engineers.

Using two rules one can describe just the numerals.

--- Introduce zero
 0

   0
----- Introduce the successor of
 1+0

Using these rules, a numeral may be defined via the following method:

[1] Introduce zero (0)
[2] Introduce the successor of (1+), stop, or go to [2].

Example results of this method are

0
1+0
1+1+0
1+1+1+0
1+1+1+1+0
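
A small J verb (the name numeral is mine) which carries out this method 
mechanically, producing the strings listed above:

   numeral =: '0' ,~ '1+' $~ 2 * ]
   numeral 0
0
   numeral 4
1+1+1+1+0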

There are many reasons for considering these rules and their use to define 
numeral:
 its simplicity is beautiful;
 it can be used to build a canonical system in which future systems may be 
interpreted; and
 it points to Kleene's generalized arithmetic from his Introduction to 
Metamathematics.
Here there is a single kind of zero and successor.
In Kleene's generalized arithmetic there are many kinds of zero and successor.

In reference to Kleene, we may add that only rules [1] and [2] given above are 
allowed to produce numerals.
This is an instance of the limiting or bounding part of our inductive 
definition.
The principal purpose of this limiting statement is to exclude numerals 
introduced by any action which has not been explicitly mentioned.

To the general public mathematics is a behaviorally conditioned performance.
My use of schema is meant to reinforce this interpretation.
To 'do math' is to identify and act upon antecedent events which are 
structurally similar to those often labeled 'mathematical' so as to bring about 
the relevant consequent.
The clear and exact identification and specification of relevant antecedent and 
consequent events is an ongoing process.

As is the case in Landau's 'Foundations of Analysis', Goodstein rarely uses 
our decimal abbreviations for numerals greater than 9 (except, as is also the 
case in Landau, as a means of numbering relevant sections and expressions).





20151025T1635 Representation in J

The representations in J are of principal importance to using the language in 
the most vivid way possible.
The representations used in J are as follows:

atomic
boxed
tree
linear
paren
explicit

Below I've included the descriptions from the jsoftware dictionary as to how 
each of these representations function in practice.
My interest in them is their use in understanding the code that has been 
written.
In the past, John von Neumann spent some time developing methods of visually 
representing the flow of control in an algorithm using boxes and arrows.
Iverson employed an updated version of this original idea in his book A 
Programming Language.
It is no longer popular to write out an algorithm visually except in the case 
where the visuals are exceptionally vivid and are perhaps necessary for a human 
to understand the structure of the algorithm being presented.
One reason for considering alternate representations of a section of code is 
that certain structural properties can be used to identify logical errors that 
would be impossible for even the cleverest compiler to spot.
A simple example is the common logical error of misplaced parentheses (or 
missing parentheses where they should otherwise be) failing to capture the 
proper calculations in the proper order.

Though using graphical visuals to program is not popular in most professional 
programming environments, it finds free rein in tools meant to teach children 
how to command little robots about an environment based on a limited 
instruction set.
For example, Lego has produced a number of products over the past ten or so 
years that have sported a variety of visual interfaces for programming a 
child's (or enthusiast's) self-made robotic creation.

The following descriptions are from 
http://www.jsoftware.com/help/dictionary/dx005.htm

"
Atomic. The atomic representation of the entity named y and is used in gerunds. 
The result is a single box containing a character list of the symbol (if 
primitive) or a two-element boxed list of the symbol and atomic representation 
of the arguments (if not primitive). Symbol-less entities are assigned the 
following encodings:

0  Noun
2  Hook
3  Fork
4  Bonded conjunction or train of adverbs

For example:
   plus=: +
   5!:1 <'plus'
+-+
|+|
+-+
   noun=: 3 1 4 1 5 9
   5!:1 <'noun'
+---------------+
|+-+-----------+|
||0|3 1 4 1 5 9||
|+-+-----------+|
+---------------+
   increment=: 1&+
   5!:1 <'increment'
+-------------+
|+-+---------+|
||&|+-----+-+||
|| ||+-+-+|+|||
|| |||0|1|| |||
|| ||+-+-+| |||
|| |+-----+-+||
|+-+---------+|
+-------------+
"

"
Boxed. 
   nub=: (i.@# = i.~) # ]
   5!:2 <'nub'
+-------------------+-+-+
|+--------+-+------+|#|]|
||+--+-+-+|=|+--+-+|| | |
|||i.|@|#|| ||i.|~||| | |
||+--+-+-+| |+--+-+|| | |
|+--------+-+------+| | |
+-------------------+-+-+
"

"
Tree. A literal matrix that represents the named entity in tree form. Thus:
   5!:4 <'nub'
            +- i.
      +- @ -+- # 
  +---+- =       
  |   +- ~ --- i.
--+- #           
  +- ]
"

"
Linear. The linear representation is a string which, when interpreted, produces 
the named object. For example:
   5!:5 <'nub'
(i.@# = i.~) # ]

   5!:5 <'a' [ a=: o. i. 3 4
3.14159265358979324*i.3 4

   lr=: 3 : '5!:5 <''y'''
   lr 10000$'x'
10000$'x'
"

"
Paren. Like the linear representation, but is fully parenthesized.
   5!:6 <'nub'
((i.@#) = (i.~)) # ]
"

"
Explicit. The left argument is 1 (monadic) or 2 (dyadic); the right argument is 
the boxed name of a verb, adverb, or conjunction. For example:

   perm=: 3 : 0
    z=. i.1 0
    for. i.y do. z=.,/(0,.1+z){"2 1\:"1=i.>:{:$z end.
   )

   1 (5!:7) <'perm'
+-+----------+-------------------------------+
|0|1 _1 0    |z=.i.1 0                       |
+-+----------+-------------------------------+
|1|65536 2 1 |for.                           |
+-+----------+-------------------------------+
|2|2 _1 1    |i.y                            |
+-+----------+-------------------------------+
|3|131072 6 1|do.                            |
+-+----------+-------------------------------+
|4|1 _1 1    |z=.,/(0,.1+z){"2 1\:"1=i.>:{:$z|
+-+----------+-------------------------------+
|5|32 3 1    |end.                           |
+-+----------+-------------------------------+
The result of 5!:7 is a 3-column boxed matrix. Column 0 are the boxed integers 
0 1 2 ... n-1. Column 1 are boxed 3-element integer vectors of control 
information: control word code, goto line number, and source line number. 
Column 2 are boxed control words and sentences.

The result of 5!:7 is a 0 3 empty matrix if the named object is not an explicit 
definition, or is undefined for the specified valence.
"





20151025T1554 Lists, Tables, Arrays, Cells, Frames, Boxes

One of the biggest differences between J and k is the way in which they deal 
with heterogeneous and homogeneous data structures.
Here a homogeneous data structure is one whose atoms are all of the same type.
A heterogeneous data structure is one whose atoms are not all of the same type.
In J, an array of nouns must be homogeneous.
There are many reasons that one might wish to use this constraint.
The first is simply a result of being built atop the C programming language 
where arrays must be homogeneous.

For computing, homogeneity simplifies the allocation scheme (assuming that the 
primitive types have a well defined "size" in storage and that the homogenous 
structure is not "too big" relative to the type of memory in which it is being 
stored).
For mathematics, homogeneity simplifies the use and proof of properties of a 
structure.

k allows heterogeneous LISTS (in the LISP sense) whereas in J the closest 
thing is a list of boxes, each of which is atomic but can contain (upon being 
"opened") a noun of any type (perhaps even another box).
So a tree in k is a list and a tree in J is a one dimensional array of boxes.

(Though in J a tree might also be represented as an array having rank greater 
than one. This same structure of boxes might also be used to 
represent forests, although it is in no way "better" than k's use of LISTS. In 
general, one representation is rarely "better" than another in all but the most 
trivial way (i.e. in its efficiency) rather there are usually design trade offs 
(though these trade offs could potentially be included in the term efficiency 
if it is widened to include the entire process of which any algorithm is only a 
part)).





20151023T1204 Notes on Chapter Two Information Structures of Knuth's TAOCP

It will be shown that N simplifies Knuth's presentation.

"
Computer programs usually operate on tables of information.
" Knuth TAOCP V1 pg. 232

Calculate with tables of data.

"
Our concern will be almost entirely with structure as represented inside a 
computer
" Knuth TAOCP V1 pg. 232

"
The information in a table consists of a set of nodes (called "records," 
"entities," or "beads" by some authors); we will occasionally say "item" or 
"element" instead of "node." Each node consists of one or more consecutive 
words of the computer memory, divided into named parts called fields.
" Knuth TAOCP V1 pg. 233

Using N, a node can be represented as a list of boxes: the contents of each 
box represents one of its fields.
For now, the "size" of each box is unspecified and is assumed to be big enough 
to contain its field without causing overflow (or underflow).

For example, the N expression (123; "abc"; 0) is a node whose first field has 
the value 123, whose second field has the value "abc", and whose third field 
has the value 0.
If you were to put (123; "abc"; 0) into an N interpreter you would observe the 
following behavior:

 123; "abc"; 0
+---+---+-+
|123|abc|0|
+---+---+-+

That is, you would be shown a visual representation of the node and its fields.
In english the expression (123; "abc"; 0) would be read "one hundred twenty 
three linked to the string lowercase-a lowercase-b lowercase-c linked to zero".
In N, the verb link ; is a simplified way of putting things into boxes and 
forming an array of boxes.
Another way of saying (123; "abc"; 0) is ((<123), (<"abc"), <0).
So, link lets you avoid having to put each noun in a box before joining them 
together.

Alternatively, a node can be, more faithfully but less expressively, 
represented as a two dimensional array of binary digits.
The number of columns represents the length of a word in the computer you're 
working with, and the number of rows represents the number of words needed to 
contain the fields of your node.
Each row may contain more than one field, and the contents of that field must 
be accessed by knowing its location in the word (row) and its "size" (the 
number of digits needed to encode the data of that field).

In modern computers, though, a node is most likely represented as a list of 
machine words (i.e. a one dimensional array of binary numerals having the word 
length of the computer) and the fields are extracted from each word by 
transforming them into their morphic word form using boolean arithmetic or via 
some alternate method.

For example a word 11001010 may have a field represented from the second from 
the left digit to the fifth from the left digit.
This can be extracted by first selecting only those digits using the word 
01111000 and then shifting the result to the right three places: 00001001 .

Alternatively, the transformation from word to list of words representing the 
digits of that word are achieved through the use of the N verbs _ and | (base 
and digits).
Let w be a word with 8 binary digits.
Then

w ~ (2 2 2 2 2 2 2 2 | w) _ 2
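
The same decomposition can be checked in executable J, where #: and #. play 
the roles of N's | and _ : the word 11001010 is 202, and the field in digits 
two through five is 1001, that is 9:

   (8 # 2) #: 202
1 1 0 0 1 0 1 0
   #. 1 2 3 4 { (8 # 2) #: 202
9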

There are an unlimited number of ways of representing a node and its fields 
using math and physics, but only a very small finite number of them are helpful 
to humans and computers alike.

Leaving behind this digression, we go on to a concrete example of encoding data 
into nodes and fields.

"
suppose the elements of our table are intended to represent playing cards; we 
might have two-word nodes broken into five fields, TAG, SUIT, RANK, NEXT, and 
TITLE
" Knuth TAOCP V1 pg. 233

For now, we assume that our node is a list of boxes each of whose contents 
represents a field of that node.
A node will have five fields hence it will be a list of five boxes.
Following Knuth's naming convention for each field we can define the fields as 
follows:

tag  : >0#
suit : >1#
rank : >2#
next : >3#
title: >4#

Thus, tag is the contents of the box at index 0 ("tag is open zero take") and 
so on.

If N is a node then

1 ~ tag N

means that the card represented by N is face down and,

0 ~ tag N

means that the card represented by N is face up.
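
For comparison, the analogous definitions run in J (this is J, not N, and the 
field values below are hypothetical; 0 stands in for a null NEXT link):

   N0 =: 1 ; 3 ; 10 ; 0 ; 'ten of clubs'   NB. a five-field node
   tag =: >@(0&{)                          NB. contents of the box at index 0
   tag N0
1
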
If one wanted, they could more faithfully reproduce the notation used by Knuth 
using the following definitions:

TAG  : >0#
SUIT : >1#
RANK : >2#
NEXT : >3#
TITLE: >4#

and when writing these functions include parentheses (which are unnecessary in 
the expressions shown above under N's simple right to left evaluation rule).

1 ~ TAG(N)
0 ~ TAG(N)

The one thing which you will not see me do is to write these relations using = 
because = is the arithmetic operation called "positive difference" whereas ~ is 
a similarity judgement (the only judgement used by N).
A programmer might think of a statement like (x ~ y) as an assertion, something 
you might use for a pre-condition or post-condition.

So the predicate "is face down" is represented as

1 ~ TAG

One might abbreviate this predicate (and its sister) as follows

IsFaceDown: 1~TAG
IsFaceUp  : 0~TAG

Now, it is possible to get closer to Knuth's use of = by representing true and 
false as the following forms of judgement:

TRUE:0~
FALSE:1~*

So that,

IsFaceDown: TRUE 1 = TAG
IsFaceUp  : TRUE 0 = TAG

In this case 1=TAG is the composition of TAG followed by 1= which is the 
projection of positive difference having left argument 1.
Though I, for one, prefer to avoid referring to true and false at all costs, 
not only because they are still vague concepts, but because they ultimately 
seem to have no purpose in the study of mathematics.



20151023T1142 Philosophy, Data Structures and N

It is one of the principal purposes of the design of my notational language N 
to remove barriers to clear and exact thought.
As such I must test its fitness to clarify my own thoughts and perhaps even the 
thoughts of others (sadly I do not have N in a state where it exists outside of 
my mind i.e. I haven't yet written an interpreter for its parts as its parts 
are still being developed).
I will be introducing a section on my website describing data structures with N.
In this case N is meant to be used as a notational language, but it also serves 
as a programming language for those familiar with the standard interpretation 
of N (the standard method of evaluation).
For now I am the only person who has any direct knowledge of how an N statement 
is to be evaluated in general.
This is in part because I have not settled on a final system, and also because 
I have not experimented with different implementations of the language (I 
haven't even put it into a single language).
Thankfully, the descriptions of basic data structures in Knuth's The Art of 
Computer Programming: Fundamental Algorithms are written in English and I need 
only process that information into N in order to "put it to the test".

Data Structures

Calculate by operating on tables of data.
Data are records of events.
Properties of the occurrence of events are transformed into events that make it 
easier for us to perform calculations upon them.
Data structures are general principles for recording events.
Relevant properties of an event are transformed into a state of a table of data.

A calculation is a certain sequence of events e.g. performing long division on 
a pair of numbers, or running a program to sort a list of numbers in a computer.
When we calculate we act on a certain collection of events (antecedents) that 
have "just" occurred, and transform them (via intermediate events) into 
occurrences of consequent events.

There is a lot to be said about this philosophical goop, and clearing it up 
will take time, so I'm going to stop here and start a new note dedicated to 
summarizing the results from Knuth's Chapter 2 on Information Structures using 
N.





20151021T1421 Table from Hui's Implementation of J's Types

I've copied the following table from Hui's "An Implementation of J" because it 
is likely that I will reference it in the future and it gives a clear 
correspondence between the types used in J and how they are mapped onto types 
in C:

AT(x) C  Description
BOOL  B  Boolean
CHAR  C  literal
INT   I  integer
FL    D  floating point
CMPX  Z  complex
BOX   A  boxed

VERB  V  verb
ADV   V  adverb
CONJ  V  conjunction

NAME  C  name
LPAR  I  left parenthesis
RPAR  I  right parenthesis
ASGN  I  assignment
MARK  I  parser marker
SYMB  SY symbol table

Following this table there is an explanation that, internally, types are 
fullword integers that are also powers of two so that one can produce and use 
the following definitions:

#define NUMERIC (BOOL+INT+FL+CMPX)
#define NOUN    (NUMERIC+CHAR+BOX)

and use them in the following phrases:

NUMERIC&AT(x)
NOUN&AT(x)



 

20151020T1721 My Developing Style of Programming in C

As I wrote in an earlier post (20151019T1705) on building an interpreter for N 
based on Hui's implementation of J, there is much to be said about the way that 
APLers use (or abuse) C.
As I've programmed in C I've found myself bouncing between two extremes in my 
opinion of it: love and hate.
At certain times I have loved some concepts (such as pointers) and have then 
gone on to hate them.
These mixed emotions are the mark of difficult design decisions that were made 
at different times and under different circumstances.
One could look towards history in order to understand the development and 
subsequent spread of C across the world of computing, and use the 
socio-political context to explain why one design decision may have been made 
over another.
It would be an interesting study, and one which I am not currently capable of 
undertaking (even if I were capable, it does not seem like the right time to 
dedicate myself to such a study; there are other things I think I should be 
working on).

Ultimately, whatever design decisions the creators of C made, the only design 
decisions that matter are those that are enforced by your local compiler.
No matter how well written a specification one gives, there is no guarantee 
that such specifications will be enforced.
It is the same problem with laws which one might wish to write and enforce 
throughout the world.
The ability to enforce a law is the limit of that law's power in practice (even 
the most well crafted law, one which appeals to universally understood 
principles of everyday human life, must be enforced either compulsively or 
impulsively in order to have any relevant impact).

In the development of a programming language like C, there are certain laws 
which are written so as to make it impossible NOT to enforce them.
Either they are tied to some characterizing feature of a language, or they are 
seemingly indispensable as a result of an appeal to "common sense".
It is often the laws which appeal to "common sense" that have the most 
surprising results in practice.

C macro definitions are, for me, an example of an application of "common sense" 
that has had more than its fair share of unintended consequences.
So far, some of the most interesting uses of macros are the result of thinking 
like an APLer.

For me, the most powerful macro definition that I've incorporated into my 
everyday C code is something I saw in Hui's record of Arthur Whitney's one page 
interpreter for J

#define DO(n,x)  {I i=0,_n=(n);for(;i<_n;++i){x;}}

It is a wonderful macro and immediately simplifies (both conceptually and 
notationally) a MASSIVE family of methods for solving problems with C.

For now, I use the following similar macros:

typedef int I;
#define F(e...) for(e)
#define I(m,e...) {I i=0,_i=(m);F(;i<_i;++i){e;}}
#define J(n,e...) {I j=0,_j=(n);F(;j<_j;++j){e;}}

First, there is no conflict between the use of I to refer to the type int and 
the use of I(m,e...) as a function-like macro.
Specifically, the preprocessor performs the relevant expansion for all I(m,e) 
expressions and then the compiler takes care of interpreting I as int from the 
typedef expression.

Second, there is no conceptual conflict between the use of I to refer to a 
macro expansion and to refer to a type when declaring variables.

Third, I and J reinforce the convention of referring to the i,j-th cell of a 
two dimensional array.
This is further reinforced by using m and n as the first argument to the 
function-like macro definition.
m is commonly used to name the number of rows of a matrix, and n is used to 
name the number of columns.

It is not necessary that I(m,e) and J(n,e) be used only for matrix (two 
dimensional array) like operations; rather, one can and should organize their 
thinking around such pictures when they help (and they often do when 
programming with C).

The 'e...' is used to refer to an expression whose command will be followed for 
each item along the I-th or J-th axis up to (but not including) the limit 
specified by the first argument of I(m,e) or J(n,e).

Suppose you wish to take a list of m*n space separated integers into an m by n 
array.
Assume m and n are less than 100.
Using these macros you could do this as:

I a[100][100];
I(m,J(n, scanf("%i",&a[i][j])))

Alternatively, you could simply read in the m*n integers into a single array 
and deal with the shape of the data using m and n specifically:

I a[10000];
I(m*n,scanf("%i",&a[i]))

Suppose the integer list has been read into a single array and that you wish to 
output the m by n array to stdout.

I(m, J(n, printf("%i ", a[n*i+j])); printf("\n"))

All together you might write this simple program as follows:

#include<stdio.h>

typedef int I;
#define R return
#define F(e...) for(e)
#define I(m,e...) {I i=0,_i=(m);F(;i<_i;++i){e;}}
#define J(n,e...) {I j=0,_j=(n);F(;j<_j;++j){e;}}

I main(){
 I m=3,n=3,a[9];
 I(m*n,scanf("%i",&a[i]))                          /* read the m*n integers */
 I(m, J(n, printf("%i ", a[n*i+j])); printf("\n")) /* print m rows of n */
 R 0;
}





20151020T1615 Tables of Data, Pointers, Box <, and Unbox >

For now the notions of box and unbox (open?) are included in N's basic 
vocabulary.
They are analogous to the C operations of & and * which, respectively, give 
a pointer to a noun or the noun to which that pointer points.
In simpler language: you can put things in boxes and you can take them out of 
boxes.
These boxes are located somewhere, and sometimes these boxes are right next to 
each other.
Boxes that are right next to each other are usually called one dimensional 
contiguous arrays of data.
There are a lot of fancy concepts related to pointers and boxes and all the 
wonderfully grotesque things you can do with such powerful and primitive 
abstractions.
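
As a pedestrian C illustration of my own of these two operations, and of boxes 
sitting right next to each other:

#include <stdio.h>

int main(){
 int a = 123;
 int *p = &a;             /* & gives a pointer to the "box" holding a */
 printf("%d\n", *p);      /* * opens the box: prints 123 */

 int b[3] = {1, 2, 3};    /* boxes right next to each other */
 printf("%d\n", *(b+1));  /* the box one step over: prints 2 */
 return 0;
}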

Ultimately, my reason for considering the verbs box < and unbox > is not 
because they reflect the pointer "objects" of the C programming language.
Rather, they can be said to have a number theoretic origin, or even a 
"philosophical" origin.

The philosophic origin is a consequence of using distinct signs when 
calculating.
A method of defining distinctness of signs is to "separate" the one from the 
other by enclosing them in "disjoint" boxes.
I do not have time here to explain the details of this argument, but its 
essential feature is that whatever it means for two things to be "distinct" as 
signs that we use to calculate we must have a concept of "box" in order to 
decide whether two signs are distinct.
One way of thinking about this is to imagine that you've written the letter 'a' 
on top of the letter 'b'.
Both occurrences are muddled together: you might be able to make out that there 
is a letter a and a letter b, but, as they are written on top of each other, 
you can not simply say that they are distinct signs, for they might be a single 
sign that happens to "look like" the letter a written over the letter b.
One of the reasons I set out to create N, and that I became interested in J in 
the first place, is this very problem of defining the distinctness of signs.
My ultimate goal is not to appeal to any purely physical interpretation of an 
occurrence or instance of a sign, but rather identify what design constraints 
are unavoidably necessary for anything derived from the use of signs as not 
necessarily physical "things".

To avoid all of these vague and ultimately irrelevant philosophical 
digressions, I'm inclined to pursue the notions of box and unbox without having 
reached any certain conclusions regarding their role in the definition of 
distinctness.

Box and unbox are unary verbs denoted by < and > .
They are conceptually similar to J's verbs box and open, which are themselves 
conceptually similar to the use of pointers and arrays in C.
Any introduction to J's use of box and open explains their primary use is in 
the construction of heterogeneous arrays.
In J there is a family of nouns that are called arrays and each of the atoms in 
an array must be of the same type.
This means that if you have a J array and you know that one of the atoms is a 
character then you know that all of the atoms are characters.
Frequently, you want to indicate relationships between atoms of different 
types: perhaps you wish to associate the name of a person with the age of that 
person.
In C the concept used to encode this idea is a structure and the special word 
'struct' is reserved for such situations.
J, and ultimately N, take a much richer approach to the methods of relating 
data of differing types to each other.

First, in J, boxes are atomic nouns.
This means that one can build an array of boxes without violating the condition 
that the array of nouns be homogenous (i.e. that every atom of the array be of 
the same type).
From this perspective box is a type of atomic noun.
Given a noun A one can box A (written <A) in order to "put A in a box".
So if we write (in J)

   A =: 123
   B =: <A

then,

   B
+---+
|123|
+---+

The output of sending B to the interpreter is a visual representation of the 
fact that B is a box whose contents are the noun referred to by the name A.
When one works with B, one need not know anything of B's contents except that 
it is a box, and one can use spatial relations between boxes in an array to 
build up relations between the nouns those boxes contain.

Thus, with boxes, one can truly realize the supreme principle of computing: 
compute with data tables.

The arithmetic of C pointers is ultimately less expressive than the concept of 
boxes from J, and yet they can be used to serve the same purpose.

So, knowing that B is a box containing A, we can get at the contents of B by 
"opening" or "unboxing" it.

   >B
123

Using arrays of boxes one can provide a complete implementation of C structs.
Whether that is a good thing or not is something I currently have nothing to 
say about.

My primary purpose in using box is that it is analogous to the use of powers of 
primes to encode an array of data into a single numeral.
Using powers of primes is not an efficient way to box and unbox, but it is a 
primitive and essential concept which leads to a number of fundamental and 
impactful results in basic number theory and discrete math.
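
A minimal sketch of my own of this prime-power encoding (the number theoretic 
idea, not an N or J mechanism): a list is boxed into the single numeral 
2^a0 * 3^a1 * 5^a2 and unboxed by counting prime factors.

#include <stdio.h>

typedef unsigned long U;

/* box: encode a[0..n-1] as p[0]^a[0] * p[1]^a[1] * ... */
U box(int *a, int n, int *p){
 U x = 1;
 for(int i = 0; i < n; ++i)
  for(int j = 0; j < a[i]; ++j) x *= p[i];
 return x;
}

/* unbox: recover one entry by counting how often the prime p divides x */
int unbox(U x, int p){
 int c = 0;
 while(x % p == 0){ x /= p; ++c; }
 return c;
}

int main(){
 int p[3] = {2,3,5}, a[3] = {3,1,2};
 U x = box(a, 3, p);                      /* 2^3 * 3^1 * 5^2 = 600 */
 printf("%lu\n", x);
 printf("%d %d %d\n", unbox(x,2), unbox(x,3), unbox(x,5));  /* 3 1 2 */
 return 0;
}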

From a practical perspective, we know that most computer hardware stores data 
in "boxes" which are composed of memory cells (e.g. in DRAM a bit is modeled as 
a capacitor, and a line of capacitors is used to store a word).
These memory cells are located in specific places, and through a combination of 
changes in relative voltages across specific lines, a computation occurs on the 
data stored in a specific location.
To perform a computation on data stored in a given location, we must know the 
location at which the data is stored (this being a characterizing part of most 
modern computations).
Though it may be possible in the future to produce methods of storage, reading, 
and writing which make it difficult to give a physical "location" to the stored 
data, there will still be a point along the "scale of computation" that a human 
must decide whether data belongs to, or came from, one box or another.
This alone is perhaps one of my stronger arguments for accepting the notion of 
box at some level.
Whatever concept it is that is encapsulated in the notion of box and its 
relation to "data location" I believe it to be an important one, one worth 
working with in any actively developed programming language.

An alternate argument is that as long as we believe computing is something 
which takes place with respect to tables of data we will need to know to what 
part of a table a specific piece of data belongs in order to not make a 
complete mess of things.

It was not my purpose to write so much about the arguments for or against the 
use of the "box" concept in computing.
I'm done for now.





20151019T1705 Work on Building an Interpreter for N

For now, N is a dynamically typed interpreted language, and though it is 
likely to have a compiled component, I will not consider that now for I can not 
spread myself so thin at such an early stage of its development.
Rather than waste time over details of the compilation of a language like N, I 
will base my current N interpreter on Hui's implementation of a J interpreter.

Hui's implementation of J is written in C.
The modern J compiler (interpreter) is written in C as well and shares much of 
the same structure as its original version written by Hui.
As with any semi-popular language, there have been a number of updates and 
changes since its original implementation.
I am not able to review all of the changes that have been made since Hui's 
original implementation, and wish only to extract from J's current and past 
interpreter the basic data structures and algorithms needed to implement a 
functioning interpreter for N.

The principal data structure used in Hui's implementation of J is the, so 
called, "APL array".
It is a simple yet powerful way of representing all of J's words in a way that 
can be easily manipulated by the basic tools available in C (in particular 
macros).
The data type of the APL Array in Hui's C implementation of J is named A.
I will adhere to that convention in what I write here.

The C data type A is defined as follows:

typedef long I;
typedef struct {I t,c,n,r,s[1];} *A;  /* t: type, c: use (reference) count,
                                         n: number of atoms, r: rank,
                                         s: shape, one entry per axis */

There is much to say about the way in which Hui and other APL-ers use (or in 
some people's perspective "abuse") the C programming language.
I will continue to write about this in a future note.





20151019T1538 Keeping sets without needing them

The importance of sets in modern mathematical and computational arguments is 
seemingly indispensable.
I say seemingly because, as has been thoroughly shown by MacLane and his 
categorical followers, there is no reason to commit ourselves to a concrete 
notion of object: morphisms are in themselves adequate for the development of a 
modern morphic analog to set-class theory.

In practice, there is a certain synthesis of set theoretic and category 
theoretic ideas which combine to give a smooth passage between one perspective 
to the other.
This passageway is often paved with equivalence relations and quotient sets.
In particular, to each function between a pair of sets there is an associated 
equivalence relation, defined as equivalence by value, and to this equivalence 
relation there is an associated collection of equivalence classes.
When the function in question carries a relevant structure from one set to the 
other (a structure which is encoded in functional relations on that set, for 
example in a unary and binary operation for a group), then the equivalence 
classes are often endowed with a similar or relevant structure.
The symmetries of these structures are usually identified through appropriate 
functions (morphisms), but their "concrete" instances are classically 
identified with sets of equivalent items.

Rather than deal with the totality of a class-set of equivalence classes, the 
existence of which is unlikely to ever be fully established with ultimate 
satisfaction, it seems easier to manage the test used to establish the 
equivalence of two items.
The decision as to whether two items are equivalent under a given function 
(morphism, or in particular primitive recursive morphism) is a bridge between 
the category theoretic perspective and the class theoretic descriptions of 
mathematics.
It not only appeals to our innate desire to seek out similarities which 
simplify, but provides an accessible and rich access point to the category 
theoretic and set theoretic perspectives on mathematics (the one beginning from 
composition the other on belongingness i.e. the one based on act and the other 
on object).
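
A small sketch of my own of what managing the test can look like in code: for a 
function f (here the assumed toy choice of remainder on division by 3), two 
items are equivalent by value exactly when f gives them the same value, and no 
totality of equivalence classes ever needs to be built.

#include <stdio.h>

typedef int I;

I f(I x){ return x % 3; }                 /* an example function (assumed) */
I equiv(I x, I y){ return f(x) == f(y); } /* equivalence by value under f */

int main(){
 printf("%d\n", equiv(4, 7));  /* 1: both leave remainder 1 */
 printf("%d\n", equiv(4, 6));  /* 0: remainders 1 and 0 differ */
 return 0;
}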





20151019T1435 In C += is a hook

In the C programming language the word += is a hook of + and = .
A hook is the general structure in which, for +=,

x += y

is equivalent to

x = x + y

(strictly, the two differ only in that x is evaluated once in the first form).
It is one of the few ways of meaningfully composing a pair of binary operations 
into a single meaningful binary operation.
All of these structures are exhibited between the combinations of plus and 
monus with pronoms x and y.
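
A small C sketch of my own showing the one respect in which the hook reading is 
not literal: the left operand of += is evaluated only once.

#include <stdio.h>

int main(){
 int a[2] = {10, 20}, i = 0;
 a[i++] += 1;   /* i++ happens once: a[0] becomes 11, i becomes 1 */
 /* a[i++] = a[i++] + 1; by contrast, would evaluate i++ twice (undefined) */
 printf("%d %d %d\n", a[0], a[1], i);  /* 11 20 1 */
 return 0;
}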





20151018T1923 Giving a Clear and Exact Description of N

The alphabet is the visible ASCII characters together with the space character:

a b c  1 2 3  A B C
d : e  4 5 6  D ; E
f g h  7 8 9  F G H
i j k  - 0 +  I J K  
l m n  % | *  L M N
  o    < = >    O   
p q r  ! _ ?  P Q R
s . t  $   #  S , T
u v w  & " @  U V W
x y z  ` ^ '  X Y Z
[ { (  / ~ \  ) } ]

Sentences are lists of letters from the alphabet.
Words are lists of letters which do not contain the space character.
Thus a sentence is, by space separation, morphic to a list of words.
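
As a rough illustration of my own (in C, not N) of this morphism, splitting on 
spaces recovers the list of words; the sentence here is an assumed example.

#include <stdio.h>
#include <string.h>

int main(){
 char s[] = "a b3 +/ word";                /* an example sentence */
 for(char *w = strtok(s, " "); w; w = strtok(NULL, " "))
  printf("[%s]\n", w);                     /* one word per line */
 return 0;
}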

There are seven types of words:

noun
verb
adverb
pronom
copula
punctum
compound

A compound word is a list of nonspace letters morphic to a list of noncompound 
words.
Thus, every compound word is morphic to a sentence of space separated 
non-compound words.
Furthermore, every sentence containing compound words is morphic to a sentence 
in which each compound word has been replaced by a space separated list of 
noncompound words enclosed in punctum.







20151018T1903 Reverse|Transpose with $

The unary verb $ stands for transpose for any array whose rank is greater 
than one.
When the rank of the noun is equal to one then $ means reverse.
I'm not sure what $ should do for an atom.





20151018T1331 Progress on The Development of N and other such things

The origin of N is in my desire to bring the power of mathematics to the widest 
audience possible.
As a consequence of this personal interest I stumbled upon Goodstein's 
brilliant foundation of mathematics described in his Recursive Number Theory.
It is unlike any other foundation I have come across, and its features are 
precisely those which appear necessary for any foundation of mathematics 
designed to bring the greatest amount of power to the largest group of people 
(mathematicians and non mathematicians alike).
It is my belief that seeking mathematical methods which are crafted to be 
fit for the layman as well as the expert will ultimately bring some much needed 
clarity to otherwise obscured mathematical truths.

Much of mathematics is a mystery to many people, and this is due in large part 
to the tools for teaching mathematics.
There are some who have an innate desire to reach for the abstractions and 
imaginations that modern mathematics demands of any one curious enough to seek 
out its knowledge and wisdom.
For us remaining mortals, there are certain barriers not only to understanding 
much of modern mathematics, but in finding a use for it as an everyday tool for 
thought.
The most frequently used mathematical notation is that for denoting quantities 
using decimal numerals.
We write down how much money we have spent at the store that day, and see if 
there is enough money in our bank account to accommodate a future purchase.
For the modern person, the most sophisticated mathematics they will be asked to 
use is likely to be the geometric series used in the calculation of interest 
from an interest rate.
Though, it is clear that even this is beyond the reach of those who need the 
knowledge afforded from such "simple" calculations in the first place.

There are certainly a larger number of mathematical facts which are much 
farther removed from geometric series which have a direct impact on everyday 
life, though what comes to mind is often needlessly abstract or technical.
One hears of the math behind the computers or the cars that we use every day 
in a completely unconscious way.
Not only is our use of the machines unconscious, but it is highly unlikely that 
one thinks of the development in mathematics necessary for the creation of 
almost all modern tools.

For the most part, the ability for one group to use mathematics with greater 
speed and accuracy over another is often the deciding factor between success 
and failure.
Most believe this is because one group is able to better manage their finances 
than another.
By mitigating financial risk one firm might hope to outlive another or perhaps 
eliminate its involvement all together in the competition of a certain market.
Sadly, all the mathematics in the world seems to leave most misers with little 
more than hours wasted looking at pointless plots, graphs, and charts.
Until the mathematics used to describe the world can be read and understood as 
we might these words upon the page, there will continue to be entire sectors of 
humanity which might otherwise avoid suffering if not for their fear or 
ignorance of mathematics and mathematical methods.

N is meant to straighten out the knotted mess that mathematics has become to 
the general public.
The importance of mathematics in our everyday life has become more apparent 
than it has ever been in the past.
As the discipline of computer programming has become accessible to the general 
public, and as children take for granted all the benefits of our hard earned 
advances in technology, there is a renewed interest in the most basic of 
mathematical problem solving skills in even the simplest of jobs.

The most thoroughgoing description of computer programming with which I am 
familiar is Knuth's The Art of Computer Programming, which is a monumental 
testament to the power of computation in our modern age.
And yet, there are few programmers, those who practice the art of computer 
programming, who are equipped to confront the mathematics found throughout his 
powerful books.
And yet, if you have access to the beautiful results he has so simply displayed 
for the working computer programmer, then you will find that much of the work 
you do has a simple and speedy resolution.
It seems that much can be done to bridge this gap between what is common 
knowledge and what is knowledge necessary for surviving in our ever advancing 
technological world.

That computer science or computer programming is seen as something separate 
from mathematics is not only a philosophic problem but one of practical 
importance.
Just as we teach children the elements of arithmetic we must extend these basic 
algorithms to include what is common knowledge to a working computer programmer.
There is no need to mention the word computer in such cases: it is plain and 
simple math.
Just because the basic algorithms of modern computer science do not look like 
the classical algorithms for addition, multiplication, subtraction, and 
division doesn't mean they are not just as important to everyday life as these 
basic arithmetical acts (perhaps more basic than arithmetic itself).

Regardless of what opinions one might have on the usefulness of including 
elementary facts from computer science in elementary school curriculum, there 
are already concrete and theoretical reasons for turning towards what is ever 
becoming an inevitable future.

Just as Russell found inspiration in Frege's concepts and Peano's notation, so 
have I found myself inspired by Goodstein's concepts and Iverson's notation.
N is a distillation of my inspired thoughts and beliefs.
It is my vigorous attempt at bringing out in the clearest and most concrete way 
possible the fundamental importance of Goodstein's methods in a language whose 
form carries said concepts as far and as fast as possible.

Goodstein's work invalidates much of modern mathematics, and does so by 
immediately dismissing the importance of the number concept.
For a mathematician familiar with the notion of number, this will sound 
heretical.
But much as Russell made his appeal to number as not simply a formal structure 
of which we might consider a given model, so does Goodstein appeal to the 
practical needs of our society and science, favoring numeral over the past 
notion of number.
For Russell, number was an entry point into the investigation of the logical 
foundations of mathematics.
In an overly trite way he commits himself to fixation on the number concept by 
appealing to the modern reductions of mathematics to numerical concepts.
In particular he is referring to the use of the concepts of ordinals, 
cardinals, and the ubiquitous relations between them and upon which modern 
mathematics is built.
For Russell, and Frege, the notion of number is defined via the idea of a 
correspondence between classes of objects.
Using a one-to-one correspondence between two collections we are able to decide 
whether the "number" of each collection is the same or not.
From this notion of bijective correspondence we derive the abstract concept of 
number being that relation which is had by all classes having the same number 
as exhibited by a one-to-one correspondence between them.
The existence or non-existence of such universal relations, or of the relation 
concept in general, is something which is still disputed.

The dispute remains because of a problem which seems common to any question we 
might ask of mathematics: that there are no mathematical objects or things to 
which we are able to point.
If we could, for example, point to a function or a number as we do to dogs and 
cats, I would find it unlikely that such concepts would remain mysterious to 
the general public, for I have yet to find a child that doesn't eventually 
become familiar with cats and dogs after a limited number of experiences with 
them.
Until such time as we can point to functions, we must not admit them without 
some reservation, for if they can be done without then so be it.

It is my opinion, and I believe one shared in part by Goodstein, that the 
supremacy and success of mathematical logic as a discipline has limited the 
general growth of mathematics as a part of human knowledge.
The speed with which we answered and asked so many interesting and open 
questions using the tools of mathematical logic, most vigorously developed by 
Russell and Whitehead and later by Tarski and others, has not only benefitted 
mathematics as a whole, but has done some harm to those whose appeals to 
nonstandard foundations are seen as nothing more than trivial or misguided.
It is interesting to me that the lesson learned from Russell and Whitehead's 
Principia Mathematica was not that there is always great need to support 
independent, clear, and exact thought, but rather that the products of such 
minds are the only thing of immediate value to human knowledge as a whole.

Thankfully, Goodstein was able to find a way to produce his works, and I 
believe in them we will find a certain sort of modern salvation to the 
philosophical and political quarrels that separate foundations research from 
everyday mathematical behavior.
Anyone familiar with Hilbert's Program will find in Goodstein a number of 
surface level similarities between their philosophic outlooks.
Both Goodstein and Hilbert mention signs as being of a certain importance, 
though I am much more familiar with Goodstein's work than I am with Hilbert's 
in general and can not comment on the extent to which Hilbert's program is 
developed from a single conception of sign.
As does Hilbert, Goodstein dispenses with number and, technically, dispenses 
with numeral as well, declaring (after some informal arguments from an analogy 
between numbers and the game of chess) "the object of our study is not number 
itself but the transformation rules of the number signs".
It is the precise description of the transformation rules of the number signs 
that gives them that universal quality which is so often attributed to the 
notion of number.
Though, in reality, it is because one can tack onto Goodstein's Primitive 
Recursive Arithmetic an additional structure of transformation rules for 
introducing a defined notion of counting that truly gives his system that 
universal quality so desperately desired from the classical number concept.

Unlike in Hilbert's case, there is a certain conceptual consistency between 
Russell's logical perspective and Goodstein's transformation rules for signs of 
arithmetic.
The specification of the transformation rules themselves are given in a 
particular symbolism, and, as such, are open to a certain amount of personal 
interpretation.

Ultimately, by using Iverson's notation and combining it with the concrete 
insights from Goodstein it is possible to demonstrate what may be an adequate 
foundation for modern mathematics that is not only simple to describe but also 
immediately useful to everyday mathematical investigation.

Of particular interest to me is the clear and exact description of how 
Iverson's notation relates to the construction in section 8.91 of Goodstein's 
Recursive Number Theory.
In it he gives a general method for constructing a linear ordering on a 
collection of functions (be they recursive, primitive recursive, or otherwise) 
that satisfy his transformation rules.
Unlike with the natural numerals, there are members of this linear ordering 
(e.g. f(t)=t) which exceed every constant function, i.e. the ordering admits 
numeral-like members each of which has a unique successor but some of which 
can not be reached via repeated succession.
He then applies this construction to the primitive recursive functions in 
particular, and it is this act which inspires my interest in constructing N 
using Iverson's notation and Goodstein's formalism.
If there is a way to understand these things in the most concrete way possible, 
I believe it is through the completion of N and through its application to 
Goodstein's arguments, and other Relativized Hilbert Programs.

Beyond having these interesting foundational qualities, Goodstein has gone on 
to show that there is nothing ultimately limiting about these alternate 
perspectives on numerals that prevents the development of mathematical 
analysis, albeit in a form quite different from what we are used to.
The ability for Goodstein's equation calculus to be used to investigate these 
foundational issues while still providing for a development of recursive 
analysis is something which should be followed through with greater vigor and 
generality.
That is what I wish to realize with N in a way that is also accessible to those 
whose desires are purely practical.




20151017T1322 My Program

What characteristics do we desire from a modern foundation of mathematics?

In the past, that there was a single system of thought adequate for unifying 
all of modern mathematics was a point of constant doubt.
Before the work of Russell and Frege there continued to be questions as to the 
merit of any such endeavor.
It was, in a social way, taboo to consider the unification of mathematics for 
it was a needlessly mystical sort of sorcery.
Practical minds were better spent toiling away at problems whose solutions 
seemed to almost present themselves, but which needed just a bit more attention 
from an able body.
Anything outside this realm of "immediate resolution" was seen as frivolous and 
wasteful.
A mind should not allow itself to be diverted by "pie in the sky" thoughts of 
conceptual harmony and unity.
Thankfully, Russell and the like found themselves persistent in their efforts 
to step out of the intellectual boundaries set out by their respective clubs of 
thought.

Unlike Russell, I am unable to collect that necessary measure of determined 
independence of thought without feeling the great push and pull of those around 
me who are committed to their conventional cause.
I spend a great deal of time wasting my efforts in seemingly pointless attempts 
to free my mind from my identifiably "lofty" thoughts.
This is a weakness of mine which I have tried to eliminate from time to time.
One day, I hope to be freed of such persistent doubts, but until then I remain 
stuck in a cycle of conformity and creativity.

At this moment I feel the confidence to explore, once again, that part of my 
intellectual desires which I have been told are self indulgent and pointlessly 
decadent.
Having either mustered this courage, or simply having lucked upon the occasion, 
I will attempt to give a clearer picture of what hopes I have for myself and 
mathematics, and where it is that I wish these hopes and dreams to take me.

I return to the question which I've posed at the beginning of this clump of 
paragraphs.
What characteristics do we desire from a modern foundation of mathematics?
I have asked this question in such a way that it lends itself to this followup 
question:

Why should our desires have anything to do with the characteristics of a 
foundation of mathematics?

It is an unavoidable condition of our humanity (or perhaps just our times) that 
one can not commit time to a project which does not satisfy the desires of a 
relevantly powerful group of people.
Without the support of a person or persons who have adequate power to assure 
that the risk of failure does not interfere with the daily activities of a 
curious mind, it is almost impossible, except in rare herculean minds, to 
follow the facts to their unavoidable truths.
This state of affairs does more to hurt the world than to help it.
That the commitment of time and effort to a frivolous project of inquiry is 
impossible without the support of a potentially irrational group of powerful 
individuals is an ancient feature of human society, one which it seems 
unhealthy to support in our "modern" age.
And yet, even with all the advancements in technology, we have yet to learn of 
a method of general government which allows individuals of a particularly 
creative sort to give free rein to their intellectual dreams and desires.

I am not advocating an anarchic freedom from any intellectual or moral 
considerations; rather, I wish for the institutions which support our ability 
to live together with our individual differences to be such that they do not 
place unnecessary restraint on those who may pave the way for a type of world 
more loving than the one we currently occupy.

As much as research requires economic support, there will always be a 
dependence upon a select group whose interests have brought them an above 
average sum of power.
A researcher who appeals to this group's want of power is likely to be given 
the resources needed to drive their personal interests forward, but, if at any 
point, they undermine the directives of their supporters, they will find 
themselves replaced by a more obedient researcher.

I do not know what it is that has made it so hard for myself to find refuge in 
some position of relative financial security so that I might free my mind from 
worries of future ruin long enough to progress in my intellectual projects.
It is hard for a mind driven to doubt to follow orders from above: "Have no 
respect for authority, for there are always contrary authorities to be found."
That having such an impulsive sense of doubt, or "skepticism" as some 
carelessly call it, is so often received as a disrespectful attitude frightens 
me.
There are plenty of persons in history who have found blind obedience to be 
foolish, and I count myself among that group.

A person committed to justifying their existence based on their present 
performance is bound to hunt for only those problems which will be resolved in 
a "timely manner".
Anything outside the scope of an economic cycle is unlikely to find support 
amongst those who have the power to support such research in the first place.
I wish I knew more about the world and how people have survived in it.
Perhaps then I would be better equipped to find my place amongst these 
troubling times.

So, rather than speculate on that which I have little knowledge, I will push 
forward my project as far as I can with what little resources I have.

Again, I return to the initial question: What characteristics do we desire from 
a modern foundation of mathematics?
I ask this to discover what it is that would satisfy the greatest number of 
people and account for the greatest number of mathematical facts that are 
familiar at this time.
For me, a modern foundation of mathematics is more a matter of design than it 
was in the past.
This is due primarily to the impossibility of producing a single definitive 
foundation in the first place.
Up until Whitehead and Russell's Principia Mathematica there was as yet no 
collective agreement that any one perspective, be it logical or mathematical, 
could address the large scope of mathematical inquiry throughout human history.
We live now in a world where there are a multitude of foundations for 
mathematics, some providing insights that others are blind to, and some which 
only a select few can even comprehend.
That a foundation of mathematics is a political tool is also something to be 
considered.
The seemingly innocent remarks made throughout mathematics departments about a 
prideful indifference to the current fad of foundations is something which 
should raise flags of concern, not half-hearted humor.

A foundation of mathematics is not meant to simply address the interests of an 
elite few capable of wasting their time over such tiresome minutia.
It is something which should interest the common person.
The mere suggestion that an everyday child should be impacted in any 
significant way by the existence or nonexistence of a specific foundation of 
mathematics will have you immediately labeled as an abject failure. 
Yet, that is exactly what I have found myself pursuing: a foundation of 
mathematics which is at once accessible to the general public, but which serves 
the sophisticated interests of professional mathematicians.

What is the impact that I would hope such a foundation would have on an 
everyday person? The inability to ignore the powerful convenience of clarity 
and exactness.
If a child can learn the tools needed to adequately understand the logical 
methods which have brought about our modern mathematics then what is to stop 
them from continuing to seek out such clarity in the social sciences?
From another perspective it can be seen that if the difference between a child 
understanding a foundation of mathematics and not understanding it is a matter 
of design then one would hope that we might seek other such accessible designs 
outside of strictly mathematical sciences.

There is, in my mind, an unavoidable connection between mathematics and the 
characteristics of clarity and exactness.
To the public, arithmetic is used as the principal analogy for describing what 
one means when they are "certain" of some fact or another.
An argument may use a statement like "no matter how you do it or think of it 
two plus two will not make five".





20151015T1038 More on the behavior of Base _ 

There are two conventions that "make sense" for the behavior of base.

The first is that the index of the numeric atom of a left argument should be 
the "place value" of that "digit".
From the polynomial perspective this means that the index of the coefficient 
gives the power of the respective variables of the term to which that 
coefficient belongs.

The alternate convention is that, knowing the shape of the left argument of 
"digits" (coefficients), the "place value" of an atom of the left argument is 
the shape monus the atom's index monus one (because indexing of elements starts 
at zero in N as in many other notational and programming languages).

The second convention follows more closely the finitist's perspective: prior 
to performing some calculation on a table of numerals we already know the 
"limits" of the table of numerals.
In other words, prior to calculation we are certain we have all the data with 
which we will calculate.
Not wanting to squander this precious data, we use it for all it's worth by 
using the convention that the most significant "digit" be placed at the 
beginning of a row-major representation of the coefficient table.
One need not appeal to a particular representation of the table (e.g. 
row-major), but could appeal to the extension of classical positional numeral 
systems where the most significant digit is also the "first" (or zeroth) digit.

It is my belief that, for now, the second convention is the most appropriate 
for N.
Appealing to the ultimate similarity between APL and N one can say that the 
information about the shape of a homogenous array of numerals is stored as part 
of that array's internal representation as an APL array (which is sometimes the 
name used to refer to the single data structure that is used to implement APL 
and J).

Another reason to prefer the second convention is because it follows the 
standard Positional Numeral Notation that we learn in elementary school and 
simply generalizes the idea to rectilinear arrays of digits rather than just 
lists.

So now for examples, to bring some concreteness to this all.
In order to make things clear I have to give examples of various dimensions of 
the left and right arguments.
Starting with the simplest and most familiar, we have the standard 
representation of a numeral in a positional numeral system to a given base:

In this first example the left argument is a one dimensional list of "binary 
digits" and the base is two

 1 0 1 1 1 _ 2
23

The second example is still to base 2, but the "digits" are now more familiar 
as "coefficients" of a polynomial.
Following the second convention described above, the power to which 2 is 
raised for each coefficient is equal to the length of the list minus the index 
of its "digit/coefficient" monus one:

 1 0 2 1 0 _ 2
26

Here it makes sense to take time to describe a specific way in which one might 
come to calculate this value using "more primitive" operations of N.

First, let d be the list of digits and b the base:

 d: 1 0 2 1 0

 b: 2

Now, let k be the "shape" or "length" of d 

 k:#d

 k
5

For now, # is shape, and it returns an atom when d is a one dimensional array, 
but returns a list when d is of any higher dimension (this convention makes 
sense when you use # as shape in various calculations).

Now, we let x be the list of "place values" corresponding to each digit in d 
under the assumption that the base is b

 x: b pow $|k
16 8 4 2 1

 |5
0 1 2 3 4

 $ 0 1 2 3 4
4 3 2 1 0

 2 pow 4 3 2 1 0
16 8 4 2 1

Now, the terms t of the polynomial are

 t: d*x

 t
16 0 8 2 0

 1 0 2 1 0 * 16 8 4 2 1
16 0 8 2 0

So that the decimal numeral corresponding to 1 0 2 1 0 _ 2 is the sum of the 
terms:

 +/t
26

 +/ d * b pow $|#d
26

Note that this is only one method of calculating this value (and it is 
certainly not the most efficient); it is, though, the classical method of 
setting up a polynomial term by term and calculating all of the information in 
pen and paper steps.
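
For concreteness, here is a C sketch of my own of the same pen and paper 
method: each digit is multiplied by the base raised to (length minus index 
monus one) and the terms are summed.

#include <stdio.h>

typedef int I;

I ipow(I b, I e){ I r = 1; while(e--) r *= b; return r; }

/* value of the digit list d (length k) to base b */
I base(I *d, I k, I b){
 I s = 0;
 for(I i = 0; i < k; ++i) s += d[i] * ipow(b, k - i - 1);
 return s;
}

int main(){
 I d[5] = {1, 0, 2, 1, 0};
 printf("%d\n", base(d, 5, 2));  /* 26, agreeing with 1 0 2 1 0 _ 2 */
 return 0;
}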

Now for the calculation of a multi base numeral i.e. a multivariable polynomial.
As is readily seen in Birkhoff and MacLane's Algebra there is an obvious 
morphism between permutations of iterated constructions of polynomial rings 
from the same commutative ring.
Furthermore there is a morphism between a multivariable polynomial ring 
over a commutative ring and iterated constructions of single-variable 
polynomial rings over that ring.

All of these concepts are subsumed under symmetries in the left argument to 
base via the relevant transformations between the cells and frames of a 
multidimensional array of coefficients.

Using the second convention of base described above, the entries of the 
coefficient matrix are associated with the "place value" whose index is the 
shape of the coefficient matrix (digits) monus the index of the coefficient 
monus one.

Alternatively, using digits |, one can build an array whose i,j-th entry is the 
index i,j (i.e. a list whose 0-th item is i and whose 1-th item is j).
Supposing the shape of the coefficient matrix is 2 3 e.g.

2 4 6
3 5 7

Then the array corresponding to the place-values (powers) of the two base 
numeral (two variable polynomial) is given as a brick from the following 
calculation

 2 3| 2 3# $|2*3
1 2
1 1
1 0

0 2
0 1
0 0

 2*3
6

 |6
0 1 2 3 4 5

 $0 1 2 3 4 5
5 4 3 2 1 0

 2 3# 5 4 3 2 1 0
5 4 3
2 1 0

 2 3| 5 4 3; 2 1 0
1 2
1 1
1 0

0 2
0 1
0 0

This can seem confusing, but it ultimately isn't (it's just that the details of 
these elementary parts of arithmetic and polynomials are scattered throughout 
modern textbooks under the guise of polynomial functions and crazy algebra and 
an often mystical invocation of ad hoc modular arithmetic).

Specifically, it is the purpose of these operations (digits | and base _) to 
lift the veil from this often misunderstood relation between digits, bases, 
place-values, numerals, and polynomials.

Before going too far into the details of a specific way of calculating _ for 
array arguments let me give a concrete example.
In this example d is an array of digits (or coefficients for polynomial people).

 d: 2 4 6; 3 5 7

 d
2 4 6
3 5 7

 #d
2 3

The shape of d (denoted #d) is 2 3 i.e. it has two rows and three columns.
For our bases (in polynomial language this would be the values of the variables 
x and y in a bivariable polynomial) we choose 3 and 5 (primes just to make life 
easier for now).

 b: 3;5

 b
3
5

 #b
2 1

Here the shape of b (denoted #b) is 2 1 i.e. it has two rows and one column.
We now visualize the computations to come as follows:

  25 5 1
 +-------
3| 2 4 6
1| 3 5 7

The first axis of the coefficient matrix (digits) d is the row axis, and along 
it we list the powers (place values) of the first base in b (3); along the 
second axis of the digits (the column axis in this case) we list the powers of 
the second base in b (5).

Now the place value of each coefficient (digit) is given by the corresponding 
value of the power of each base in the digit's row and column.
Each row of the following table contains the factors of each term in the 
polynomial with coefficients d and arguments b

3 25 2
3  5 4
3  1 6
1 25 3
1  5 5
1  1 7

For me it is easier to see the place values next to the coefficients (this is 
how it properly generalizes, and, though some people might prefer seeing the 
coefficients to the left, this way it is easier to see that the place-values 
are ordered in a reverse lexicographic order):

1 2 2
1 1 4
1 0 6
0 2 3
0 1 5
0 0 7

For those familiar with thinking about sparse matrices, this table can be 
interpreted as a representation of a sparse matrix, the remainder of whose 
entries are zero.

Now, to complete the calculation we can sum the column of products across each 
row

 +/ */' 3 25 2; 3 5 4; 3 1 6; 1 25 3; 1 5 5; 1 1 7
335

 */' 3 25 2; 3 5 4; 3 1 6; 1 25 3; 1 5 5; 1 1 7
150
 60
 18
 75
 25
  7

 +/ 150; 60; 18; 75; 25; 7
335

Having performed the computation for one instance, we can now go back and see 
what general form this method of calculation takes, and how it is similar to 
the expression for the lower dimensional case:

+/ d * b pow (#b)# (#b)| $|*/#d

Again, this is a notebook and as such is not composed of complete thoughts.
The previous N expression is intended as a complete description of the behavior 
of base _ on any array of digits and bases.
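
As a cross-check of the worked example above, here is a C sketch of my own 
computing the same sum of coefficient times power products for the digits 
2 4 6; 3 5 7 and the bases 3 and 5:

#include <stdio.h>

typedef int I;

I ipow(I b, I e){ I r = 1; while(e--) r *= b; return r; }

int main(){
 I d[2][3] = {{2,4,6},{3,5,7}};   /* the digits (coefficients) from above */
 I s = 0;
 for(I i = 0; i < 2; ++i)
  for(I j = 0; j < 3; ++j)
   s += d[i][j] * ipow(3, 1-i) * ipow(5, 2-j);  /* bases 3 and 5 */
 printf("%d\n", s);               /* 335 */
 return 0;
}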





20151014T1203 From same to similar ~ and [m f n] bracket notation

My original purpose in naming the dyad ~ same was because "same" is a four 
letter word whereas "similar" is a seven letter word.
it in English.
It has not been my intent to assume that the user of N speaks English, nor 
that ASCII is the only character set to which they have access (though it is 
still not wise to assume any specific character set, as there is unlikely to 
be any universal agreement even among standards committees as to what 
constitutes a likely or "final" character set; perhaps the notion of character 
sets will end up being nothing more than an archeological or anthropological 
topic, even geological!).
I had considered, for a time, developing pronunciations of each verb that 
appeal to a sort of Pidgin English, something like that spoken in certain parts 
of the world, but have since ignored that path of inquiry as it did not help me 
while I was using and developing N.

The use of the word same was needlessly limiting.
Similar is the word which I had originally intended to describe the concept 
embodied in ~ but it seemed vague.
I have since accepted that it is the simplicity of "similar"'s vagueness that 
is a strength of the overall design of N.
One of the emerging properties of N is the use of ~ as the single "judgement" 
operation.
That is, the only judgement that one must make is one of similarity between a 
left and right argument.
My reasons for this are philosophical, logical, and mathematical.
The mathematical reasoning is embodied in my use of the verbs < = > as 
arithmetic operations rather than as relations between numerals.
The logical reasoning is embodied in Goodstein's equation calculus.
The philosophical reason is based on Russell's Human Knowledge.
Jokingly, but most certainly not merely jokingly, Russell makes the audacious 
suggestion that, with proper provisions, one might confidently construct a 
theory of human knowledge whose principal relation is that of similarity.
He goes on to describe how the relation of similarity is one between events, 
and that with proper axioms (in the philosophic sense not in the mathematical 
logic sense) one can reclaim the reasoning that is characteristic of hard 
sciences such as physics (primarily the reasoning by induction from the facts).
What I'm doing is far removed from Russell's grand scheme, but Goodstein's 
equation calculus uses the single judgement of "equality" to develop Number 
Theory and Recursive Analysis.
Thus showing that Russell's outlandish simplification might actually be less 
outlandish than it seems at first glance.

My want to name ~ "similar" is also born from an idea that I had which would 
allow users of the notation to redefine ~ as it suits their current context.
Interestingly, there is no need for this, as it is clear that by using 
Goodstein's equational calculus all relevant judgements are actually special 
instances of a similarity judgement.
In Goodstein's case the most interesting part of the similarity judgment is 
characterized by his primitive recursive uniqueness rule:

"
the 
primitive recursive uniqueness rule

U
F(Sx) = H(x, F(x))
 F(x) = H^x F(0)

where the iterative function H^x t is defined by the primitive recursion H^0 t = 
t, H^Sx t = H(x, H^x t)
" Goodstein, Recursive Number Theory, pg. 104

together with the standard features of similarity: transitivity and the 
Indiscernibility of Similars (my generalization of Leibniz's Indiscernibility 
of Identicals).
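
To make rule U concrete, here is a small numeric check of my own in C, with the 
toy choices H(x,t) = t + x and F(0) = 7 (both assumed purely for illustration): 
the recursion for F and the iterate H^x applied to F(0) agree.

#include <stdio.h>

typedef int I;

I H(I x, I t){ return t + x; }              /* assumed toy H */

I F(I x){ return x ? H(x-1, F(x-1)) : 7; }  /* F(Sx) = H(x, F(x)), F(0) = 7 */

I Hiter(I x, I t){                /* H^x t: H^0 t = t, H^Sx t = H(x, H^x t) */
 for(I i = 0; i < x; ++i) t = H(i, t);
 return t;
}

int main(){
 for(I x = 0; x < 6; ++x)
  printf("%d %d\n", F(x), Hiter(x, 7));     /* the two columns agree */
 return 0;
}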

In N notation transitivity of similarity is encoded in the rule:

 x ~ y
 x ~ z
-------
 y ~ z

Indiscernibility of Similars is encoded in the rules:

    f ~ g
-------------
 f[x] ~ g[x]

    x ~ y
-------------
 f[x] ~ f[y]

Although, in the rules just given, there is a certain "favoritism" given to the 
difference between a constant and a constant function, as I would prefer these 
be written as:

    x ~ y
-------------
 x[z] ~ y[z]

    x ~ y
-------------
 z[x] ~ z[y]

The idea of "Indiscernability of Similars" is that once you accept a similarity 
you are committed to an inherent vagueness between similar i.e. it is not 
possible to "distinguish" between similars under  given similarity relation.

In these rules I've used the, as yet only loosely defined/described, 
"abstraction" notation [].
In classical computer science (and in some mathematical settings) the square 
brackets are used in so called "index notation".
For example, if A is an array and we wish to refer to the item of this array 
located in the first row and the second column (assuming zero indexing i.e. we 
start by labeling with zero rather than one) then we would write this as A[0,1] .
In N, we would write the same thing, but instead of interpreting A[0,1] as an 
ad-hoc abbreviation it fits into a larger framework.

Here A is some function (in this case we believe it to be a function or verb 
which when given a pair of numeral arguments returns the relevant entry in a 
rectangular array) and A[0,1] is a claw of A with the constant function [0,1].
Thus for any arguments m and n we have

A(0,1) ~ m A[0,1] n

for the following are similar:

m A[0,1] n
A m [0,1] n
A 0,1
A(0,1)

Inside of the square brackets the letters m and n take on special meaning (they 
are bound to their enclosing pair of brackets) so that, for example:

3 [m+n] 5
3 + 5
8

or

3 [n*3] 5
5 * 3
15

or

3 [m-3] 5
3 - 3
0

3 [A 2,n,m] 4
A 2,4,3

or

A[3,n] 4,5
A 3,4,5

or

A[n,4,5] 2,3
A 2,3,4,5

so that any train or claw of verbs can actually be written using explicit 
constructions with [].

As with standard lambda functions, we can specify which pronoms are to be 
"bound" in a given bracket expression:

3 [[x y] x+y] 5
3 + 5
8

3 [[y x] x-y] 5
5 - 3
2

3 [ [3+n] n + m] 5
[3+n] 5 + 3
[3+n] 8
3 + 8
11

The last example shows that, for now, the pronoms m and n are bound to their 
immediate square bracket expression; occurrences of the same pronom in nested 
or enclosing brackets are unaware of each other.

What if there are more items in the argument list than pronoms in a bracket 
expression?

[[x y z] x + y + z] 3,4,5,6,7
3 + 4 + 5
3 + 9
12

3 [[x y z] x + y + z] 4,5,6,7
3 + 4 + 5
3 + 9
12

3 4 [[x y z] x + y + z] 4,5,6,7
3 4 + 4 + 5
3 4 + 9
12 13

3 4 [[x y z] x + y + z] 4 5 6 7; 8 9 10 11
3 4 + 4 5 6 7 + 8 9 10 11
3 4 + 12 14 16 18
(3 + 12 14 16 18); 4 + 12 14 16 18
15 17 19 21; 16 18 20 22

15 17 19 21
16 18 20 22

For now, these examples are representative of the referral of argument pronoms 
to the corresponding items of the right argument.
In the future it might seem reasonable to allow the specification of items of 
the left and right arguments e.g.

3 4 5 [[[x y] u v] u + v + y,x] 5 6 7
5 + 6 + 4 , 3
5 + 6 + 4 3
5 + 10 9
15 14

Though, the notation becomes a lot less expressive at this level and it is 
likely that your function definition (whatever it might be) is better suited to 
being broken into pieces.

f:[n,m]
3 4 5 [[x y z] y + z + f x] 5 6 7

This idea of breaking things into pieces using proper pronom definition might 
be taken so far as to eliminate the nesting of bracket expressions.
This is a restriction that I will consider, but for now I see no deep reason 
for eliminating it outright.

note that 

[n,m] 3
3,3
3 3

[n,n] 3
3,3
3 3

[n,m] 3,4,5
4,3
4 3

[n,m] 3 4 5;6 7 8;9 10 11
6 7 8, 3 4 5
6 7 8 3 4 5

So there's that too... you have to think in terms of items, be they items of an 
array or items that have been "linked" together.
Needless to say, the only place to find order amongst these many different 
types of interpretations is in the well worn paths of mathematics.





20151013T1516 What's been going on with N? Introducing rod | and poly $ and 
Eliminating meet and join.

The past week has been full of new developments in my life and with N.
I've been admitted into the wonderful non-profit program run by LaunchCode.
That process took a considerable amount of time and attention, and the results 
were more than satisfactory.
Consequently, I have dedicated little time to working directly on the 
development of N, but that has not stopped my brain from continuing to crunch 
away at it as I'm thinking about something else.
As a result there have been some major changes in the basic vocabulary and the 
full organization of N, its development, and its documentation.

First, and perhaps most importantly, I've realized that the act of "modulus" as 
it is usually called in the classical computer science languages has a well 
defined meaning in the multidimensional case i.e. when there is a list of 
numerals as its left argument.
Furthermore, this conceptual extension is harmonious with the basic elements 
needed to evaluate polynomials (something which I was already going to make 
primitive, but which now has a new balance).

These new verbs are the dyad rod | and the dyad poly $.
Though, having just written that, I think that the elimination of & and 
| as meet and join is to thank for opening my mind to these new definitions.
What I've done is to realize, in a more decisive way, the interplay between 
Goodstein's work in Recursive Number Theory and the methods a programmer might 
use to work with propositions and conditions, or, in general, relations between 
numerals and basic propositional logic.

To be clear: the binary operations of * (times) and + (plus) are more than 
simply adequate for use as models for the logical versions of join and meet, 
but introduce a much needed deviation from common habits: that of using 0 to 
represent False and 1 to represent True.
As is common in the history of human discovery, the use of 0 and 1 to represent 
false and true is as silly as the use of positive charge and negative charge to 
describe a quantity related to electrons and protons.
Unlike in the case of physics, there is no loss in abandoning common 
convention, especially when the resulting simplicity is intoxicatingly 
convenient and reinforces the larger system of thought that N embodies.

Though it may never become popular, the interpretation of true as 0 and false 
as 1 lends itself to another common generalization: that of "shades of 
falsehood".
Beginning with binary arithmetic, it is obvious that * is the operation with 
which one is compelled to interpret logical "or":
* 0 1
0 0 0
1 0 1

From this table it becomes obvious that x*y is false only when both x and y are 
false i.e. x*y is true only when x is true or y is true.
Furthermore:

+ 0  1
0 0  1
1 1 10

and

*+ 0 1
 0 0 1
 1 1 1

That is, the claw sgn plus is a way of forcing a purely classical result from 
what classical logicians desire in an operation of "and".
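
A small C sketch of my own of this 0-is-true convention: times acts as "or" and 
sgn of the sum acts as "and" (the names or_ and and_ are mine, for 
illustration only).

#include <stdio.h>

typedef int I;

I sgn(I x){ return x > 0; }            /* unary * above, restricted to numerals */
I or_(I x, I y){ return x * y; }       /* 0 (true) exactly when either is 0 */
I and_(I x, I y){ return sgn(x + y); } /* 0 exactly when both are 0 */

int main(){
 printf("%d %d %d %d\n", or_(0,0), or_(0,1), or_(1,0), or_(1,1));     /* 0 0 0 1 */
 printf("%d %d %d %d\n", and_(0,0), and_(0,1), and_(1,0), and_(1,1)); /* 0 1 1 1 */
 return 0;
}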

Most people are more inclined to accept the interpretation of true as being 0 
if false is interpreted as being greater than zero.
Note, that up to this point the notion of true and false are being interpreted 
using the general judgement of ~ (same) i.e.

0 ~ x

means "x is true" and

1 ~ * x

means "x is false".
Notice that this is just one example of a family of problems whose general 
question is:

"What logical statements are reducible to the relation of sameness between 
primitive recursive functions?"

The most thorough answer is, as you might anticipate if you've followed any of 
my work here, to be found in Goodstein's works Recursive Number Theory and 
Recursive Analysis.
Though, it must be noted that it is not Goodstein's goal in either of those 
works to establish the supremacy of his equational calculus in answering the 
call of "logical analysis"; rather, his goal is to show that modern mathematics 
is satisfactorily encompassed by a system of notation that is surprisingly 
simpler than anything which we have yet encountered.

Though it is clearly my intent that such questions be thoroughly answered as a 
result of N's continued development as a notation language, there is the more 
pressing matter of getting the right mathematical tools in the hands of the 
largest number of people possible.
The right tools are those which reflect our modern knowledge of math and 
science (in particular computer science).

There is much more to be said about this all, but for now such abstract 
concepts are not relevant to the immediate development of N as a powerful tool 
for thought.

The use of the word "rod" to describe | is based not only on its appears as a 
rod, but also on its function as it relates to the rods of a generalized abacus.

In the normal operation of an abacus there is a collection of parallel rods, 
each of which has the same number of beads upon it.
It is from this ingenious machine that the modern positional notational systems 
and the standard arithmetic algorithms are born.

Suppose you wish to find the remainder of 210 divided by 10.
One method which will produce the desired solution is to count to 210 using a 
standard abacus where each rod has nine beads on it.
Counting to 210 on such an abacus seems silly because one can easily represent 
the base ten numeral 210 on an abacus by simply producing the following 
configuration:

===============
  o    o    o  
  o    o    o
  o    o    o 
  o    o    o 
  o    o    o 
  o    o    o 
  o    o    o 
  |    o    o 
  o    |    o     
  o    o    |
===============
  2    1    0

The remainder of 210 divided by 10 is 0, which can be read from the right most 
rod of the abacus.
Now, suppose you wish to find the remainder of 12 divided by 7 then you could 
count to twelve on an abacus having six beads to each rod i.e. a base 7 abacus:

==============
  o    o    o 
  o    o    |
  o    o    o 
  o    o    o 
  o    o    o 
  o    |    o     
  |    o    o
===============
  0    1    5

Again, using the right most rod we read off the remainder as 5.
In N these actions would be written:

 10|210
0
 7|12
5

This process works in general for finding the remainder of dividing any two 
numerals (as long as you agree to count the number of beads on the right most 
rod using a decimal system).

The generalization where the left argument to | is a list of numerals should 
now seem obvious: it is the resulting down beads on an abacus each of whose 
rods has the specified number of beads.

For example to find 2 4 | 15 we use the following abacus:

==========
  o    o 
  |    o 
  |    o     
  |    |
==========

Just as when we count with an abacus having a finite number of rods, we 
pretend to perform a carry but have no way of recording it, so information is 
lost in the process of counting.
In this generalization of the rank one | we do not require that information.
So counting out fifteen on this abacus gives the final position:

==========
  |    | 
  |    o 
  |    o     
  o    o
==========
  1    3

So that,

 2 4 | 15
1 3

That is, 2 4 rod 15 gives 1 3 (full English "two four rod fifteen gives one 
three").
As a consequence of this convention for rod | one can now easily bridge the gap 
from numerals to lists of numerals e.g.

 10 10 | 15
1 5
 2 2 2 2 2 2 | 101100
1 0 1 1 0 0
 2 2 2 2 2 2 | 44
1 0 1 1 0 0

Since the dyad | is derived from the remainder of Euclidean Division the 
convention is that 0 | x should return x so that 0 2 | x should return the 
quotient of x divided by 2 followed by the remainder of x divided by 2:

 0 2 | 3
1 1
 0 5 | 13
2 3

Thus, going back to our use of abaci, the 0 in the expression 0 5 | 13 
indicates that the leftmost rod has an unlimited number of beads, so every 
carry from the rod second from the right can be recorded by bringing down 
another bead.
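
Sketching the rank one rod in Go makes the abacus reading concrete (the 
function name and slice convention are mine; a 0 entry models a rod with an 
unlimited number of beads):

package main

import "fmt"

// rod gives the final bead positions after counting x on an abacus
// whose rods hold the listed numbers of beads.
// A leading 0 names a rod with unlimited beads: it absorbs every
// carry, so the quotient appears there instead of being lost.
func rod(beads []int, x int) []int {
    out := make([]int, len(beads))
    for i := len(beads) - 1; i >= 0; i-- {
        b := beads[i]
        if b == 0 {
            out[i] = x // unlimited rod: record the whole carry
            x = 0
            continue
        }
        out[i] = x % b
        x = x / b // the carry passed to the next rod leftward
    }
    return out
}

func main() {
    fmt.Println(rod([]int{2, 4}, 15))             // [1 3]
    fmt.Println(rod([]int{2, 2, 2, 2, 2, 2}, 44)) // [1 0 1 1 0 0]
    fmt.Println(rod([]int{0, 5}, 13))             // [2 3]
}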

In J the operation called rod here is named anti-base (#:) and its partner is 
base (#.).

In N the name rod is used just as a place holder for what I hope will be a 
better name than "anti-base".
The use of the word rod is catchy though as it describes both the symbol | and 
the origin of its behavior in reading off the rods of an abacus.
This is one of those design coincidences that might be called a fitting 
surprise; as such, it may be that rod ends up being the final name for the 
operation | .

The further generalization of rod | to a left argument being a rank two array 
or an array of higher rank will have to wait until I know more about polynomial 
operations and matrices/tensors.

For now, the operation of "base" from J is called poly, for polynomial, and 
denoted $.
The reason for choosing $ is that it contains rod, but weaves together all of 
the beads (giving the value of the polynomial with given coefficients).

To calculate 1 0 1 $ 2 we must start with a three rod abacus having one bead on 
each rod

=============
 o    o    o 
 |    |    |
=============

Then we count with this abacus until we reach the final configuration:

=============
 |    o    |
 o    |    o
=============
 1    0    1

This is analogous to using the abacus to count down from a starting position 
while accumulating the relevant number of beads or objects.

This picture is not useful, though, when the list of numerals on the left 
contains numerals that are beyond the numeral on the right, e.g. 1 3 2 $ 2 .
This suggests that what we are really doing is starting with the same abacus

=============
 o    o    o 
 |    |    |
=============

but before counting backwards we count out as follows


      =============
       o    o    o 
       |    |    |
      =============
-------------------- count 2: start with the right rod (since 2 is the right 
numeral in 1 3 2)
      =============
       o    |    o 
       |    o    |
      =============
-------------------- count 3: start with second rod from right rod in 
antecedent configuration
 ==================
  |    o    o    o
  o    |    |    |
 ==================
-------------------- count 1: start with third rod from right
 ==================
  |    |    o    o
  o    o    |    |
 ==================
  1    1    0    0

In decimal notation this is the configuration of a binary abacus that was used 
to record a count of 12.
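
In code this counting collapses to a single left fold, which is just Horner's 
rule; a Go sketch (the function name follows the J vocabulary above, the rest 
is mine):

package main

import "fmt"

// base weaves a list of bead counts together at the given base:
// it is the left fold  acc = acc*b + digit , i.e. Horner's rule.
func base(digits []int, b int) int {
    acc := 0
    for _, d := range digits {
        acc = acc*b + d
    }
    return acc
}

func main() {
    fmt.Println(base([]int{1, 0, 1}, 2)) // 5, as in  1 0 1 $ 2
    fmt.Println(base([]int{1, 3, 2}, 2)) // 12, the count recorded above
}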

Why does any of this matter?
That is a good question to ask, because as I have introduced them here, these 
operations do not seem to have much connection to anything that you would 
"really want to do".
In other words, it seems like a fun game to play when using an abacus to 
relate our method of writing out numerals with actual counts of a collection of 
objects, but it doesn't seem like the type of thing that one would be naturally 
led to consider.
To support this dismissive position, one can consult Goodstein's Fundamental 
Concepts of Mathematics and see that the ideas are not introduced until a 
little less than half way through his 319 page book.

The occurrence of these concepts at the midpoint of his book is not a 
coincidence; it is the turning point which separates "elementary" mathematics, 
the type you are used to doing up till high school, from "higher" math.
The concept that ties together these operations | and & (which is the new 
symbol for $)...

Sorry, I must take a moment to remind anyone reading this of two things: this 
is a notebook, not a blog or a collection of well thought out posts; and, being 
a notebook, it is a record of the flow of thought.
It is, consequently, subject to the type of orthogonal divergence that is 
characteristic of a relatively creative mind.

So, the notation | is now called digits and & is called base.
This follows Goodstein's convention and also sets | and & in deliberate 
opposition to their classical use as symbols for the logical operations "or" 
and "and" (which, as I've already said many times before, do not have a place 
in the primitive vocabulary of N).

Also, & is like | only all wound up and looking kind of like a bag.

Back to the regular flow of thought.

The introduction of the polynomial concept is a pivotal one; it is where 
algebra and analysis get interesting.
Polynomials, as introduced by Goodstein and as introduced here, are a way of 
introducing new numbers to a preexisting collection of numbers.
Given a collection of numbers, say the rational numbers (which one can 
represent as triples like 1n3r5 or 4r5 in N), we can form new numbers as lists 
of rational numbers.
These lists are written down formally using a formal base such as the symbol 
'x'.
The construction of one system of numbers from another is only "natural" after 
one becomes more familiar with repeated use of an "arithmetic of pairs" to move 
from natural numbers to integers, from natural numbers to fractions, from 
integers to rational numbers, from rational numbers to the rational numbers 
together with rational multiples of the square root of two.
That there is a general method which ties together some of these "arithmetics 
of pairs of numbers" is the result of years of human investigation, 
imagination, and creation.

For now, most kids are introduced to polynomials as functions, and their 
algebraic nature is irreversibly mixed in a vague and ultimately unhelpful way 
with their analytic nature.
The interpretation of polynomials as numbers to an as yet undefined base would 
be unfamiliar to most high school students who are familiar with polynomials.

There is much more to be said to complete this narrative and to explain why 
these operations are unavoidably necessary in a notational language for math, 
science, and society: they are currently the best way to deal with a large 
class of mathematical descriptions without introducing anything more than lists 
of numbers.





20151012T1234 Even more draft material from N.html

There have been a lot of changes to the basic vocabulary and structure of some 
important N notation which has required the removal of most of the work that 
has been done previously.
One thing is certain: the language is progressing quickly and continues to 
settle into well worn fundamentals.

Examples

        0 ~ *x + 6*x + 7
 (0 + -7) ~ *x + 6*x + 7 + -7
      0n7 ~ *x + 6*x + 7 + 0n7
      0n7 ~ *x + 6*x + 7n7
      0n7 ~ *x + 6*x + 0
      0n7 ~ *x + 6*x
        3 ~ 6%2
        9 ~ *3
(0n7 + 9) ~ *x + 6*x + 9
      9n7 ~ *x + 6*x + 9
     *x+3 ~ *x + 6*x + 9
      9n7 ~ *x+3
        2 ~ *x+3

+/!n ~ n-1 * n % 2
+/ !1+n
+/ !n , n
+/!n + n
n + +/!n
n + n-1 * n % 2
(2*n + n-1) * n % 2
2+n-1 * n % 2
1+n * n % 2
1+n * 1+n-1 % 2
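
A quick mechanical check of the first identity, +/!n ~ n-1 * n % 2, in Go 
(recall that % is divide in N, so the right hand side is (n-1)*n/2):

package main

import "fmt"

// sumEnum is +/!n : the sum of the first n naturals 0 1 ... n-1.
func sumEnum(n int) int {
    s := 0
    for k := 0; k < n; k++ {
        s += k
    }
    return s
}

func main() {
    for n := 0; n <= 10; n++ {
        fmt.Println(n, sumEnum(n) == (n-1)*n/2) // true for every n
    }
}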






20151008T2105 More draft material from N.html that no longer belongs there

f~g
-----
f~g n

m~n
-------
f.m~f.n

i~j
i~k
---
j~k

f`S~I g f
---------
f~g^I f Z

(g^I)`Z~L
(g^I)`S~g^I g I

+`Z~I
+`S~S+

P`Z~Z
P`S~I

-`Z~I
-`S~P-

*`Z~Z
*`S~+*`I

Calculate and compute anything anywhere with N.

What is N?
N is new.
N is fast.
N is clear.
N is simple.
N is natural.
N is computable.
N is interactive.
N is highly parallel.
N is thought provoking.
N is a notational language.

N makes math easy.
N leverages our language instinct to transform vague intuition into practical 
exploration.
Decimals replaced Roman numerals: N replaces ancient notation for calculating.
Use N on a black board, on a napkin, or on a computer.

No operator precedence: evaluate from right to left.

But what about "My Dear Aunt Sally"?
The classical notion of "order of operations" is an ancient habit as odd as 
using Roman numerals for arithmetic.
Its elimination gives simplicity, clarity, and generality to any algebraic 
expression.
It also eliminates the age old headaches of "what do I calculate first?"

Why would they teach order of operations in school if you didn't need it?
For the same reason the Romans taught their children to use Roman numerals.

There has to be a catch! You must use a lot of parentheses.
No catch, and no.

It must be like learning to read hieroglyphs!
It's not.

So you're just trying to make a "standard notation" for doing math with 
computers?
No.
We already have tons of standards and lots of notation.
N is a perspective on calculation and computation in math and science.
Its notation is a consequence of its perspective, not the other way around.

But, I don't get it.
Take a few deep breaths.
I'm making things simpler every day.
Soon you'll see the whys-and-hows at-a-glance.

Wait, isn't N just a flavor of APL, J, or k?
No.
k is a programming language.
N is a notational language.
The fact that N might be used to program computers is a consequence of its 
purpose, not its purpose.
k is for computation.
N is for calculation.
That computation is a type of calculation is surprisingly hard to prove (and in 
fact we often just assume tacitly there is a correspondence between computation 
and calculation).
N should and can be used anywhere: on paper, on a chalkboard, on a whiteboard 
etc.

But, can't you write k programs on a napkin if you wanted?
Yes.
You can write k programs by hand without worry because both k and N derive 
their notational conventions from Iverson's APL and J.

So then really, why should I be interested in N rather than k?
N is for calculations of all kinds: k is for computing.
Ultimately you can compute easily and efficiently with N, but its design is 
guided towards the fundamental limits of calculation with notation.

I still don't understand what the difference is between calculating and 
computing.
You seem to be making up the distinction without justification.

Try programming a computer to do algebra or calculus, you will discover that 
what is easy to calculate is not always so easy to compute.





20151008T1244 N and Accessibility
If a person can't understand your product then they will have no reason 
whatsoever to want it.
They might want it, they might even need it, if they only knew what it was or 
what it did for them.
While it's not always a great idea to start by thinking "how can I convince 
people that they want my product" you have to put serious effort into answering 
this question in a way that is not only effective but also commensurate with 
your ethical outlook.
Some people are driven to manipulate people using whatever means are easily 
available to them.
This can mean doing simple or silly things, often involving appeals to our 
primal nature e.g. sex.

What is needed to be successful in these things is a single sentence 
description of your entire product and its purpose, one that captures all that 
it might be to people, and more.





20151006T1032 Reinstalling J
Before upgrading my mac to El Capitan I decided to clean my computer and start 
from scratch.
I made sure that not only had I backed up all of my necessary files (to Google 
Drive), but that I had also recorded the results of ls -a in my applications 
directory.
I just realized that I had not yet reinstalled J (more specifically Jqt).
It is very easy on a mac (and on all other major OS):
http://code.jsoftware.com/wiki/System/Installation/QuickStart





20151005T1909 Residue v. Remainder

The dyadic verb ! has been remainder or what some computer scientists and 
programmers might call the mod operation.
There are a variety of definitions amongst mathematicians and computer 
scientists as to what constitutes a mod or remainder operation.
The classical remainder operation is usually extracted from the classical 
Euclidean algorithm for division.
The statement of the relevant theorem is as follows:

Euclidean Division
Given numerals a and b with 1 ~ @ b (one same signum of b i.e. b greater than 
zero) there exist unique q and r such that

a ~ r + b * q
r ~ r > 0
r ~ r < b - 1

where a is the dividend, b is the divisor, q is the quotient, and r is the 
remainder.

From Goodstein's RNT the remainder and quotient are defined via recursive 
functions satisfying the relevant logical relations between dividend, divisor, 
quotient, and remainder from Euclidean Division.

"
The notions of quotient and remainder are introduced into recursive number 
theory by means of the recursive functions Q(a,b) and R(a,b) which we define as 
follows:

To simplify the formulae we write alpha(c,d) for alpha(|c,d|) so that 
alpha(c,d)=0,1 according as c,d are equal or unequal, and define

 R(0, b) = 0
R(Sa, b) = S R(a, b) * alpha(S R(a, b), b)

and

 Q(0, b) = 0
Q(Sa, b) = Q(a, b) + (1 - alpha(S R(a, b), b))

That these functions have in fact the required properties is shown by the 
following formulae:

a = b * Q(a, b) + R(a, b)
b > 0 -> R(a, b) < b
((a = b * c + r) & (r < b)) -> (c = Q(a, b) & r = R(a, b)).

" Goodstein RNT Chapter IV pg. 86

Translating this to the relevant N operations:

  (0 ! b) ~ 0
(a S`! b) ~ (a S! b) * (a S! b) @= b

  (0 % b) ~ 0
(a S`% b) ~ (a % b) + 1 - (a S! b) @= b
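
Goodstein's recursions run exactly as written; here is a direct Go 
transcription (alpha, R, and Q keep his names, and the loops stand in for the 
recursion on Sa):

package main

import "fmt"

// alpha(c, d) is 0 or 1 according as c, d are equal or unequal.
func alpha(c, d int) int {
    if c == d {
        return 0
    }
    return 1
}

// R is the remainder: R(0,b) = 0, R(Sa,b) = S R(a,b) * alpha(S R(a,b), b).
func R(a, b int) int {
    r := 0
    for i := 0; i < a; i++ {
        r = (r + 1) * alpha(r+1, b)
    }
    return r
}

// Q is the quotient: Q(0,b) = 0, Q(Sa,b) = Q(a,b) + (1 - alpha(S R(a,b), b)).
func Q(a, b int) int {
    q, r := 0, 0
    for i := 0; i < a; i++ {
        q = q + (1 - alpha(r+1, b))
        r = (r + 1) * alpha(r+1, b)
    }
    return q
}

func main() {
    fmt.Println(R(13, 5), Q(13, 5)) // 3 2, and indeed 13 = 5*2 + 3
}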





20151005T1849 N:Complete The Square Draft Design

This is just draft design work to see how it looks to put in all the 
excruciating details using the current conventions of N.

        0 ~ *x + 6*x + 7
 (0 + -7) ~ *x + 6*x + 7 + -7
      0n7 ~ *x + 6*x + 7 + 0n7
      0n7 ~ *x + 6*x + 7n7
      0n7 ~ *x + 6*x + 0
      0n7 ~ *x + 6*x
        3 ~ 6%2
        9 ~ *3
(0n7 + 9) ~ *x + 6*x + 9
      9n7 ~ *x + 6*x + 9
     *x+3 ~ *x + 6*x + 9
      9n7 ~ *x+3
        2 ~ *x+3

The previous derivation is done without the use of parentheses, using a more 
subtle "space" convention: symbols which are immediately adjacent to each 
other are "grouped" into a parenthesis enclosed expression when you consider 
evaluation from right to left.
For clarity I decided not to use the following conventions:

0+-.7 ~ 0 + -7
0n7+9 ~ 0n7 + 9

The reason for this "grouping with space" is because it mirrors exactly the 
methods used in standard written notation throughout the world: space is used 
to separate words and one can think of 0n7+9 as a sort of German compound 
noun-verb i.e. a single word referring to its result under classical 
evaluation (remember that 0n7 is the pair "zero negative 7" or "positive zero 
negative seven" though the latter is unlikely to be used with much frequency).

The same argument with parenthesis instead of spaced grouping:

        0 ~ (* x) + (6 * x) + 7
 (0 + -7) ~ (* x) + (6 * x) + 7 + -7
      0n7 ~ (* x) + (6 * x) + 7 + 0n7
      0n7 ~ (* x) + (6 * x) + 7n7
      0n7 ~ (* x) + (6 * x) + 0
      0n7 ~ (* x) + 6 * x
        3 ~ 6 % 2
        9 ~ * 3
(0n7 + 9) ~ (* x) + (6 * x) + 9
      9n7 ~ (* x) + (6 * x) + 9
(* x + 3) ~ (* x) + (6 * x) + 9
      9n7 ~ * x + 3
        2 ~ * x + 3





20151004T1900 A formalization of the notational language N 
Goodstein, in his Recursive Number Theory, builds an equational calculus for 
primitive recursive functions.
That is one way of describing his work.
Another is that his calculus is a description of the necessary relations 
between any notation for doing basic arithmetic, algebra, and number theory.
Without extension he turns Recursive Number Theory into a powerful analytic 
tool in his Recursive Analysis.

By closely studying the formal system within which his equational calculus is 
developed, and identifying how it is that such a simple collection of premises 
weave to form much of modern mathematics, we gather insight into not only the 
foundations of mathematics, but also the foundation of any notation needed to 
work with math in a "user friendly" way.

It is possible, and often undertaken as a recreational activity, to develop 
most of formalized mathematics in purely esoteric notational languages which 
are built for the sole purpose of confusing and confounding the user.
Many see such things as fun puzzles.
On the other hand, they are proof that it is always possible to function with 
even the most inadequate and headache inducing notation.
There is no reason for believing that our notation is not equally headache 
inducing, or might be to some future generation of mathematicians and computer 
scientists in hindsight.
What can be done to know with some level of certainty that the notation 
conventions we've acquired from the past and present are not needlessly barring 
us from as yet unseen insights or ease of use?

The study of language is old, and the study of formal language is new.
The formal study of formal language is very new, being 100 (or so) years old 
(one might say that axiomatics as a mathematical discipline was introduced most 
concretely in Hilbert's Grundlagen der Geometrie published in 1899, or perhaps 
modern formal language study began with Hilbert's Program).
One thing which seems to be similar amongst most formal languages is that they 
derive their form from some logical discipline.
Prior to the construction of a formal language there is often a discussion of 
the logical framework from which the formal language is built and inspired.
Kleene's presentation in his Metamathematics is interesting in that the first 
part is dedicated to an examination of the use of mathematical logic within 
modern set theory and modern mathematics, and then he embarks on a presentation 
of a formal language without building it from his earlier monologue on 
mathematical logic in general.
He does this in order to send a clear message to the reader that the 
metamathematical discipline is often significantly different from classical 
mathematical arguments.

The largest question, and that question which continues to plague modern 
mathematicians, is what principles are admissible in constructing something as 
primitive as a metamathematical argument.
There are certain things that must be communicated or transferred to the reader 
in order to get across a metamathematical argument, and it is not always clear 
what things these are.
The most frequently used methods are those of elementary arithmetic or 
elementary number theory.
That is, from the notion of number (or numeral) we are inspired to accept 
certain principles prior to the development of any formal system.
One might go so far as to say that it is the notion of number which has 
inspired all modern mathematics and science.

With Goodstein's work we have a simple system for framing these questions in a 
clear and exact way.
It is only half way through his book "Recursive Number Theory" that a 
formalization of his primitive recursive arithmetic is given.
This is because prior to the formalization he develops the primitive recursive 
arithmetic using only an informal statement of each of the principles that are 
finally formalized in his system R.
Though his method of presentation is informal, it is only slightly so: each of 
the arguments is made using a clear and distinct description of what actions 
are permissible and which are not.

The formalism of Goodstein's Primitive Recursive Arithmetic is as follows (in 
Goodstein's words)

"
the only axioms are explicit and (primitive) recursive function definitions, 
and the only inference rules are the substitution schemata

Sb1:
F(x) = G(x)
-----------
F(A) = G(A)

Sb2:
   A = B
-----------
F(A) = F(B)

T:
A = B
A = C
-----
B = C

where F(x), G(x) are recursive functions and A,B,C are recursive terms, and the 
primitive recursive uniqueness rule

U
F(Sx) = H(x, F(x))
------------------
 F(x) = H^x F(0)

where the iterative function H^x t is defined by the primitive recursion H^0 t = 
t, H^Sx t = H(x, H^x t); in the schema U, F may contain additional parameters 
but H is a function of not more than two variables.
In Sb1, the function G(x) may be replaced by a term G independent of x, 
provided that G(A) is also replaced by G.

...

the defining equations for these operations to be:

a+0=a , a+Sb=S(a+b);
0-1=0 , Sa-1=a;
a-0=a , a-Sb=(a-b)-1;
a*0=0 , a*Sb=a*b+a
" Goodstein Recursive Number Theory pg. 104 and 106

He redescribed the formal system on the first page of his Recursive Analysis as 
follows:

"
Recursive analysis is a free variable theory of functions in a rational field, 
founded on recursive arithmetic.
It involves no logical presuppositions and proceeds from definition to theory 
by means of derivation schemata alone.

The elementary formulae of recursive arithmetic are equations between terms, 
and the class of formulae is constructed from the elementary formulae by the 
operations of propositional calculus.
The terms are the free numeral variables, the sign 0 and the signs for 
functions.
The function signs include the sign S(x) for the successor function (so that 
S(x) plays the part of x+1 in elementary algebra) and signs for functions 
introduced by recursion.
The derivation rules are taken to be sufficient to establish the universally 
valid sentences of the propositional calculus, and include a schema permitting 
the substitution of terms for variables, the schema of equality

a = b -> {A(a) -> A(b)},

and the induction schema

A(0), A(n) -> A(S(n))
---------------------
        A(n)

the schemata for explicit definition of functions for any number of arguments, 
and finally schemata for definition by recursion.
The simplest definition schema for recursion, the schema of primitive 
recursion, is 

f(0, a) = g(a), f(S(n), a) = h(n, a, f(n, a))

Specifically this schema defines f(n, a) by primitive recursion from the 
functions g and h.
We take as initial primitive recursive functions the successor function S(x), 
the identity function I(x), defined explicitly by the equation I(x)=x, and the 
zero function Z(x) defined by Z(x)=0.
A function is said to be primitive recursive if it is an initial function or is 
defined from primitive recursive functions by substitution or by primitive 
recursion.
" Goodstein Recursive Analysis Chapter 1 Section 1 pg. 1 and 2

My reason for reproducing these in their entirety is not only because I will 
likely be referencing them frequently in the future as I develop N and the 
system of mathematics I've been incubating for a long while, but because they 
showcase a desperate attempt to give modern mathematicians the constant 
reassurance they need in order to accept a different way of looking at old 
things.

The largest difference between Goodstein's description of primitive recursive 
arithmetic at the beginning of Recursive Analysis (RA) and in Recursive Number 
Theory (RNT) is his inclusion of "the operations of the propositional calculus" 
and "The derivation rules are taken to be sufficient to establish the 
universally valid sentences of the propositional calculus" in RA.
This is done so as not to give someone interested in only his Recursive 
Analysis the sense that something fantastic has occurred in his previous book 
RNT.
In RNT, you see that the formalization does not include any explicit mention of 
the elementary operations of the propositional calculus.
This is because they are introduced as abbreviations for equations of a certain 
form!
In other words, their introduction is simply an identification of certain 
abstract similarities between the form of families of arithmetically equivalent 
expressions.
In Goodstein's own words

"
It is shown that a certain branch of logic is definable in the equation 
calculus and logical signs, and theorems, are introduced as convenient 
abbreviations for certain functions and formulae.
This branch of logic is characterized by the fact that it can assure the 
existence of a number with a given property only when the number in question 
can be found by a specifiable number of trials.
" Goodstein RNT pg. 11

Sadly, in order to cast his system R in a classical light, Goodstein used = 
when writing the formalization of his system.

"
The sign '=' here signifies that the expressions which stand on either side of 
it are equivalent so that either may replace, or be replaced by, the other; 
that is to say A1 and A2 express transformation rules by which one sign pattern 
may be transformed into another.
(There is of course another entirely different use of the equality sign '=' in 
mathematics to which we shall have occasion to refer later).
" Goodstein RNT pg. 14

This produces unnecessary conceptual complications when describing how the 
propositions and propositional functions abbreviate arithmetical acts.
Interestingly, he even identifies a correspondence between positive difference 
and "equality" or "equivalence" which is not only principal to his derivation 
of propositional calculus but to his entire equational calculus.
By defining = as positive difference, and using ~ for "equivalence" (his 
intended use for '='), we see a method by which seemingly all of classical 
algebra can be brought closer to elementary arithmetic.

First, let's reformulate Goodstein's formalism of primitive recursive 
arithmetic using the notational conventions of N:

Schema Sb1
f ~ g
-------
f ~ g n

or (without fork notation)

  f ~ g
---------
f.n ~ g.n

Schema Sb2
  m ~ n
---------
f.m ~ f.n

Schema T
i ~ j
i ~ k
-----
j ~ k

where f and g are recursive functions and i, j, k, m, n are recursive terms, 
and the primitive recursive uniqueness rule

Schema U
(f S) ~ I g f
-------------
f ~ g^I f Z

where the iterative function g^I is defined by the primitive recursion

(g^I Z) ~ I
(g^I S) ~ g^I g I

The conventions of N make Goodstein's comment, that f may contain additional 
parameters while g is a function of not more than two variables, irrelevant: 
the restriction is implicit.
Furthermore, his comment that in schema Sb1 the function g may be replaced by a 
term independent of some variable is irrelevant as there is no need for 
variable notation using the fork/claw and naming conventions of N.
Note, I is the right identity, Z is the constant zero function, and S is the 
(right) successor.

The defining equations for addition, predecession, subtraction, and 
multiplication are:

+`Z ~ I
+`S ~ S +

(P Z) ~ Z
(P S) ~ I

-`Z ~ I
-`S ~ P -

*`Z ~ Z
*`S ~ + *`I

Here ` is the "tie" or "bond" adverb and it is used to transform the left or 
right argument of a function by another function prior to being used e.g. 3 +`S 
4 gives 3 + S 4 which is 3 + 5 which is 7.
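
To make the shape of these definitions concrete, here is a Go sketch of 
arithmetic built from iterated successors and predecessors (iterate plays the 
role of ^I here; all of the names are mine):

package main

import "fmt"

// iterate applies the step function g to t, x times: g^x t.
func iterate(g func(int) int, x, t int) int {
    for i := 0; i < x; i++ {
        t = g(t)
    }
    return t
}

func main() {
    S := func(n int) int { return n + 1 } // successor
    P := func(n int) int {                // predecessor, with (P Z) ~ Z
        if n == 0 {
            return 0
        }
        return n - 1
    }

    add := func(a, b int) int { return iterate(S, b, a) } // b successors of a
    sub := func(a, b int) int { return iterate(P, b, a) } // b predecessors of a
    mul := func(a, b int) int {                           // add a to 0, b times
        return iterate(func(t int) int { return add(t, a) }, b, 0)
    }

    fmt.Println(add(3, 4), sub(3, 4), mul(3, 4)) // 7 0 12
}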

These rules can be restated in something more notationally and conceptually 
similar to Goodstein's original function notation by using N's function 
abstraction notation which is basically an N expression enclosed in square 
brackets where the letters m and n are numeral variables.
As it is, what has been presented is a combination of a combinator calculus, a 
functional calculus, and a calculus for relational algebra.





20151004T1536 Where is Apple going?
Prior to last month's Apple Special Event I had let out a series of tweets on 
the future of Apple products.
Prior to those tweets I had speculated on the Apple Watch as a spectacular 
product that is really a huge investment in developing the type of "computer on 
a chip" and sensors one would need in order to successfully deploy VR tech to 
the general public.
In that series of tweets I also suggested that Apple would devote themselves to 
making reality more virtual rather than making virtual reality more real.

Tim Cook spoke at the last special event and used the phrase "A Single Plate of 
Glass" which was a signal that my thinking has been correct: by making products 
that are more virtual and less real they will transition the general public 
into an ideal harmony of virtual reality with reality.
The products they are converging to will become single transparent devices, 
perhaps mildly cloudy or textured as if they were aluminum (if they could make 
aluminum glass they would certainly use it), and they will market the beauty 
of the inner structure that will be seen through the translucence.
They've done it in the past, and they'll do it again, but now with a more 
specific intent on migrating the public to an interface more conducive to the 
types of interactions a user needs when working in a fully or semi immersive 
virtual space.

Most people think that the hardware they make now, with support for enhanced 
graphics for "games" and the like, are marked with a single purpose: give 
people what they want.
It's never that simple, not for any company, and especially not for a company 
as powerful as Apple.
For Apple, the iterative progress from high performance games to the types of 
high performance graphics needed to seamlessly match reality to virtual reality 
is but one of the many purposes of these current innovations.

When it comes to consumers using VR tech Apple knows all VR tech that has come 
out or is about to come out is ugly and obstructive.
We're far from bringing VR to everyday life, but we can manage that gap by 
making reality more like VR, and in the process get the upper hand once the 
tech for VR is there in a form that is convenient for consumers.
They also know that as generations grow they can sometimes skip steps in 
teaching and training the public as long as a certain mass of the "younger" 
generation is already more comfortable with what older generations might call 
"nuisances" (these are all those weird things that 'older' and 'younger' people 
complain about when they just don't 'get' why everyone likes Apple and their 
products so much).

Apple's plans are made so that what they've done and what they ship gives them 
as much of an opportunity to make future connections as possible without trying 
to realize any specific future.
No matter how clearly and precisely one describes the outcome they desire, 
there is no plan that, once followed, guarantees its realization.
Not only might your desires become more than you could have possibly imagined 
in the past, but the opportunities that give a few people the upper hand occur 
in the moment, and aren't always hanging out for just anyone to grab at.
You have to give yourself a web of opportunities to catch any future prospects 
that might fly your way.

Mathematicians are very familiar with this method of solving problems.
We build entire theories, entire abstractions, and elaborate collections of 
propositions and proofs with the distinct purpose of setting the game up so 
that we may win.
The success of set theory is but one of the many examples of a tool, a product, 
that lets us play a game we already know we can win.
Though, in math it is not seen so simplistically.
A set theorist will tell you that the origin of sets, collections, and their 
realization is born from a fundamental question about the nature of certain 
Fourier series by Cantor.
What they miss is that it was because Cantor sought a clearer understanding of 
an unavoidable feature of mathematical structures that he came to his 'winning' 
theories of cardinals and ordinals using sets.
The problems had been there the entire time, and their solutions were there 
too, but they were cast in a vague and seemingly mystical form.
Sets built a strong, clear, and exact web with which much has been caught since 
its creation.

What we see of Apple is only the veneer.
The shows, the products, the hardware, the software, these are all just the 
little flecks of icing that they throw our way every once in a while.
As much as what they show and give us is something they truly cherish and love, 
they have kept their focus by always looking towards fundamental limits.
Not just physical limits, but the psychological and social limits of brand, 
trust, and promise.
I imagine everyone who works on the products and presentations we see are 
proud, but I believe it's more likely they are even prouder of all that they 
can't tell us about what they've learned in the process of making the products 
we love.





20151003T2331 Notes from N.html that are better put in my Notebook


Draft Material 

Don't look down here unless you're ready to see what it takes to turn an idea 
into an innovation and an innovation into a revolution.
(seriously, you'll get the wrong impression if you're not prepared)
(I warned you. Any impression you have past this point is no longer my 
responsibility, no matter how false it might be.)

What do you mean by exact?
If you want to compare things within a tolerance then you have to do so 
explicitly, otherwise N only deals with exact values, no approximations.

But what about all those beautiful real numbers?
We have a lot to talk about... real numbers are really misleading.

Oh, so you just mean you're working with floats?
No.
You can imagine a way of working with N using floats, but it makes as much 
sense as tying toasters together until they're Turing complete.

Now you're really crazy! No reals? No floats!?
What about all those beautiful analytic results from functional analysis that 
are the bread and butter of signals processing!?

Ya... it's hard to accept, but we'll be fine without it, better even.

I don't believe you.
Don't take my word for it, Goodstein did most of the work in his Recursive 
Number Theory (1957) and Recursive Analysis (1961).

Oh, but that's all recursive stuff! No one wants to be worried about such a 
technical esoteric mathematical logical globidy gloopity: it's all just one big 
chore!

Life is one big chore.
Also, that's the exact response many people had to his original work: what a 
chore.
But they never meant "what a chore" in a logical sense; they just found his 
evisceration of classical analysis revolting from a moral standpoint.
You might think of the response people had to early doctors learning of human 
anatomy by examining the dead.

Mathematics and morals? You have to be kidding.
Surprisingly, or perhaps not, mathematicians are people just like you or me.
They are prone to bouts of collective delusion just as much as any other group 
or club.
As computers have become more prevalent, it has been harder for certain 
delusions to persist: they must be confronted for us to continue moving forward.
Also, it's important to note that even Russell did not object to a philosophy 
congruent to that of Goodstein's works.
Rather he admitted that it was a path he did not have the heart to take after 
having confronted the foundations as he saw them.

Russell? Goodstein? Now you've gone and introduced philosophy into what you 
said was a practical problem.
This seems like nothing we mere mortals must worry ourselves with. 

Use space wisely.
Don't fear space.
Don't abuse space.
Balance space with proportion.
Space is the most powerful part of a sentence.

Use space to chunk N expressions.
When it comes to compound expressions, two or three basic parts is often 
the conceptual limit.
Five basic parts is a HARD limit (wink wink).

Separate nouns from verbs i.e. keep data away from the actions performed on it.
Similarities between subexpressions should be explicitly collected into a 
single subexpression e.g.

(3*x)+(3*y)+(3*z)

(3*x) + (3*y) + (3*z)

(3 * x) + (3 * y) + (3 * z)

(3*x)+ (3*y)+ (3*z)

(3* x)+ (3* y)+ (3* z) 

(3* x)+ (3* y)+ 3* z  This is an efficient way to write expressions with dyads

(3*x)+(3*y)+3*z  valid, but perhaps not wise

+/3* x,y,z This is how one would actually write and think of this expression

a:x,y,z    Or, as is more likely, this is the way it would be written
+/3*a



(x+1)*(x+2)*(x+3)

(x+1) * (x+2) * (x+3)

(x + 1) * (x + 2) * (x + 3)

(x+ 1)* (x+ 2)* (x+ 3)

(x+ 1)* (x+2)* x+ 3

(x+1)*(x+2)*x+3

*/x+ 1+!3

a:1+!3  remember, separate data from acts (nouns from verbs)
*/x+a



(x)+(x*(1+x))+(x*(1+x)*(2+x))+(x*(1+x)*(2+x)*(3+x))

x + (x*(1+x)) + (x*(1+x)*(2+x)) + (x*(1+x)*(2+x)*(3+x))

x + (x * (1+x)) + (x * (1+x) * (2+x)) + (x * (1+x) * (2+x) * (3+x))

x + (x * (1 + x)) + (x * (1 + x) * (2 + x)) + (x * (1 + x) * (2 + x) * (3 + x))

x+ (x* 1+ x)+ (x* (1+ x)* 2+ x)+ (x* (1+ x)* (2+ x)* 3+ x)

x+(x*1+x)+(x*(1+x)*2+x)+x*(1+x)*(2+x)*3+x

+/ (*/ x+!)" 1+!4

+/(*/x+!)" 1+!4

a:1+!4
+/(*/x+!)"a

a:1+!4
b:(*/x+!)" a
+/b

a:1+!4
f:(*/x+!)"
+/f a



1*2*3*4*5*6

1 * 2 * 3 * 4 * 5 * 6

1* 2* 3* 4* 5* 6*

*/ 1,2,3,4,5,6

*/ 1 2 3 4 5 6

*/1+! 6

*/ 1+!6

a:1+!6
*/a



x*(x+1)*(x+2)*(x+3)*(x+4)

x * (x+1) * (x+2) * (x+3) * (x+4)

x * (x + 1) * (x + 2) * (x + 3) * (x + 4)

x* (x+ 1)* (x+ 2)* (x+ 3)* x+ 4

x*(x+1)*(x+2)*(x+3)*x+4

*/ x,(x+1),(x+2),(x+3),x+4

*/ x+ 0,1,2,3,4

*/ x+ !5

*/x+ !5

*/x+! 5

x */+ !5

a:!5
*/x+a

a:!5
x */+ a



(1%t)*((2^t)%(1+t))*(((1+1%2)^t)%(1+t%2))*(((1+1%3)^t)%(1+t%3))

(1%t) * ((2^t)%(1+t)) * (((1+1%2)^t)%(1+t%2)) * (((1+1%3)^t)%(1+t%3))

(1 % t) * ((2 ^ t) % (1 + t)) * (((1 + 1 % 2) ^ t) % (1 + t % 2)) * (((1 + 1 % 
3) ^ t) % (1 + t % 3))

(1% t)* ((2^ t)% 1+ t)* ((^t 1+ 1% 2)% 1+ t% 2)* (^t 1+ 1% 3)% 1+ t% 3

*/ 1%t , (^t 1+ 1%) % (1+t%) 1+!3

a:1+!3
b:1%t, (^t 1+ 1%)%(1+t%) a
*/b

a:1+!3
f:(^t 1+1%) % 1+t%
*/ (1%t),f a

a:1+!3
u:*/(1+1%a)^t
v:*/1+t%a
1%t * u%v

a:1+!3
u:*/^t 1+1%a
v:*/ 1+t%a
1%t * u%v

Note */ (1%t), (^t 1+ 1%)%(1+ t%) 1+!n is the n-th partial product of Euler's 
product expression for Gamma of t .

n choose k
(*/ n-) % (*/ 1+) !k
*/ (n-) % (1+) !k
*/ n- % 1+ !k
*/ n-%1+ !k
*/(n-%1+) !k
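
Read as a product over !k of (n-j) % (1+j), with % as divide, this is easy to 
check in Go (the function name is mine):

package main

import "fmt"

// choose computes n choose k as the product, for j in !k, of (n-j)/(1+j).
func choose(n, k int) int {
    c := 1
    for j := 0; j < k; j++ {
        c = c * (n - j) / (j + 1) // stays integral: c is (n choose j+1) here
    }
    return c
}

func main() {
    fmt.Println(choose(5, 2), choose(10, 3)) // 10 120
}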
*/n-%1+ !k

A method of computing the k-th Laguerre polynomial at x
+/ (*/ x^ , 0n1^ , k-%1+.! , 1%*/.1+!) !1+ k

Another method (though the same, just not explicit i.e. breaking into parts)
C:{*/x-%1+ !y}   n C k means n choose k
f:*/1+!         f k means factorial of k
+/ (*/ x^ , 0n1^ , k.C , 1%f)!1+ k
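
Under the usual reading, the k-th Laguerre polynomial at x is the sum over i 
in !1+ k of C(k,i) * (0n1^i) * (x^i) % (factorial of i); a Go sketch of that 
sum (the names are mine, with floats for the divisions):

package main

import (
    "fmt"
    "math"
)

// laguerre sums C(k,i) * (-1)^i * x^i / i!  for i from 0 to k.
func laguerre(k int, x float64) float64 {
    sum := 0.0
    c := 1.0    // C(k, i), updated incrementally
    fact := 1.0 // i!
    for i := 0; i <= k; i++ {
        if i > 0 {
            c = c * float64(k-i+1) / float64(i) // C(k,i) from C(k,i-1)
            fact *= float64(i)
        }
        sum += math.Pow(-x, float64(i)) * c / fact
    }
    return sum
}

func main() {
    fmt.Println(laguerre(2, 1.0)) // -0.5, since L2(x) = 1 - 2x + x^2/2
}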

Representation is not, in itself, enough to justify this notation.
Ease of manipulation and experimentation is what matters.
Eliminate variables, replace with pronoms.


N-dimensional statements are often simpler and clearer than their specific 
instances.
This is achieved using Iverson's adverb constructions.
Summation notation is replaced with +/ .
For example +/x+y*!z sums the first z terms of an arithmetic sequence.
It can be read from left to right as "plus over x plus y times enumerate z".
 
The unary verb ! is called enumerate.
It stores a rectangular array of consecutive numerals in row-major form.

 !0
()
 !10
0 1 2 3 4 5 6 7 8 9
 !3 3
0 1 2
3 4 5
6 7 8
 !3 3 3
 0  1  2
 3  4  5
 6  7  8

 9 10 11
12 13 14
15 16 17

18 19 20
21 22 23
24 25 26

Some may prefer to allocate !27 and store its shape somewhere else instead of 
using !3 3 3 .

"
Given a k-dimensional array with c-word elements A I"!k for (0<|=I) & 
I<|=d (or &/ <|=2\ 0,I,d) we can store it in memory so that 
(A LOC I) = +/ (A LOC k#0), (*/ c, I, {1+ (1+x)_ d"!k})" 1+ !k
or
a:*/ c, {1+ (1+x)_ d !k}
(A LOC I) = +/ (A LOC k#0), a*I" 1+ !k
"From TAOCP V1E3 pg.299 (translated to N)

Notice, the math described here is also the code needed to implement this 
allocation method.
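
Stripped of the LOC bookkeeping, the position of an index I in a row-major 
array of dimensions d is the digits-and-base pattern yet again; a Go sketch 
(names mine, element size taken to be one word):

package main

import "fmt"

// offset gives the row-major position of index I in an array of
// dimensions d: the sum over j of I[j] times the product of the
// dimensions to the right of j.
func offset(d, I []int) int {
    pos := 0
    for j := 0; j < len(d); j++ {
        pos = pos*d[j] + I[j] // Horner again: base and digits in disguise
    }
    return pos
}

func main() {
    d := []int{3, 3, 3}
    fmt.Println(offset(d, []int{2, 1, 0})) // 21, matching the !3 3 3 display
}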

The meaning of the expression +/x+y*!z has great generality.
Each term of the sum is made by applying x+y* to an item of !z .
The expression x+y* or x+ y* stands for an arithmetical sequence starting at x 
with step size y.
An arithmetic sequence starting at 2 with step size 3 named s would be defined 
by writing

 s:2+3*
 s 0
2
 s 1
5
 2+3* 2
8
 2+ 3* 2
8
 s.2
8
 s 0 1 2 3
2 5 8 11
 s.0 1 2 3
2 5 8 11
 s.!4
2 5 8 11
 s !4
2 5 8 11
 s!4
2 5 8 11
 s"!4
2 5 8 11
 s"1 !4
2 5 8 11

Often, there is more than one way to say the same thing.

In general if s takes a numeral and returns a term of a sequence (i.e. if s is 
a verb) then +/s!n sums over the first n terms of s .
s need not give a simple numeral.
It may produce a matrix or higher dimensional array.
Suppose s:2*,2+,2- which is read "s is two times append two plus append two 
minus" then

 s 3
6 5 0n1

That is s 3 (or s.3) gives the vector 6 5 0n1 whose first component is 6, 
second component is 5, and third component is negative one.
Then

 a:2*,2+,2-
 (+/a"!3)=(a 0)+(a 1)+a 2
1
 (+/a"!3)=(2*,2+,2- 0)+(2*,2+,2- 1)+2*,2+,2- 2
1
 (2*,2+,2- 0)=(2*0),(2+0),2-0
1
 (2*0),(2+0),2-0
0 2 2
 (2*1),(2+1),2-1
2 3 1
 (2*2),(2+2),2-2
4 4 0
 (+/a"!3)=+/0 2 2;2 3 1;4 4 0
1
 +/0 2 2;
   2 3 1;
   4 4 0
6 9 3
 +/a"!3
6 9 3

That example shows how one can explore the meaning of the notation and play 
with math and computer science.
It is a simple example, and many find it unfamiliar, preferring the classical 
summation notation.
The reason is that in these simple examples, that is in the classical uses of 
summation notation, N seems clumsy.
The strength of N is in the simplicity of statements that are otherwise hard to 
express using classical summation notation.


N and Sums

Knuth, Graham, and Patashnik's Concrete Mathematics is probably the most 
passionate love poem to summations.
In it they embrace Iverson's square bracket notation, but ignore his / adverb.
Let's start by considering why they worship classical summation notation.
First, they call it "generalized Sigma-notation" on pg.22 and contrast it with 
delimited summation.
Their first significant evidence is the sum of the squares of the odd integers 
below 100.
First they present the "generalized Sigma-notation" for this sum:
  ___
  \      2
  /__   k  
0<k<100
  k odd

This is contrasted with the delimited form:

 ___49
 \           2
 /__   (2k+1)
   k=0

The savings is that the idea "sum the squares of all odd integers below 100" 
is better communicated in the former than in the latter.
The purpose is to focus our attention on the information that we're not just 
squaring any numbers, we're squaring odd numbers.
And not just any odd numbers, those odd numbers that are less than 100.
The argument is that (1+2*k)^2 is a "bad" way to represent the square of an odd 
number when manipulating it in a proof using English as its metalanguage.
(one might write it as {^2 1+ 2*}k )

My contention is that N provides a much simpler interface to these concepts 
(both on a chalk board, in a book, and yes... on a computer).
First, the savings in generalized Sigma-notation comes from its encoding of the 
information "odds less than 100" by placing constraints on k.
Those constraints being "0<k<100" {0< & <100}k (read "zero less 
and less one hundred k") and "k odd".
I would write this sum using N as follows.
First let olt mean "odds less than".
You might write it casually as:

olt:   odds less than

where everything following "   " is a comment.
You would read the expression "olt:   odds less than" as "olt is (or stands 
for) odds less than"
That being the principle information we must communicate.
I would then write the sum +/ ^2" olt 100 which reads "plus over power two each 
olt one hundred".
Or, for someone familiar with this notation (just as you would have to be to 
use "generalized sigma notation") it reads
"sum of the squares of odds less than one hundred"
Notice, there is no extraneous variable "k" there is only the information on 
what actions are to be performed.
Now, the real difference between the generalized sigma notation and the 
delimited form is how we represent odds less than one hundred.
This is left unspecified in N as we've used it.
To review, the whole N statement would be

olt:   odds less than
+/ ^2" olt 100

where it is obvious we have yet to specify how olt is "constructed".
In CM they've chosen to represent odd numbers as the result of applying 1+2* to 
a numeral.
One might define odd:1+2* so that odd n gives the n-th odd number.
Though, this doesn't make it easy to know whether odd.23 is less than 100 or 
not.
We could find out easily by listing out the first 100 odd numbers and seeing 
where they are less than 100:

 ?(odd!100)<100
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 
30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49

This, being very close to k code, is read "where odd of enumerate 100 less 100".
And someone used to this notation would say "where the first 100 odds are less 
than 100".
We could select the maximum number (49) with

&/?(odd!100)<100

Which is read "meet over where odd of enumerate 100 less 100".
In this "pidgen" English you can instantly extract its meaning and means of 
computation.
So one could write the delimited form as

odd:1+2*
+/ ^2 odd &/?100>odd!100

Or, more as is done in Concrete Math,

+/^2 1+2*!50

Which is just as unexpressive as the standard delimited sigma notation for 
such an expression.
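
Both readings are easy to spot-check in Go (the loop bounds are mine): the 
constrained form filters the odds below one hundred, the delimited form 
indexes them.

package main

import "fmt"

func main() {
    // +/ ^2" olt 100 : sum the squares of the odds less than one hundred
    constrained := 0
    for k := 1; k < 100; k += 2 {
        constrained += k * k
    }

    // +/^2 1+2*!50 : square 1+2*k for each k in !50, then sum
    delimited := 0
    for k := 0; k < 50; k++ {
        delimited += (1 + 2*k) * (1 + 2*k)
    }

    fmt.Println(constrained, delimited) // 166650 166650
}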

The binary operation - is likely to be replaced by monus when used on natural 
numerals (chars).
When applied to an integer quantity (ints), like 0n2, it becomes minus.
Arguments are automatically promoted to the meet of their respective types.

As an example of using - with natural numerals and integer numerals:

 0-3
0
 5-6
0
 6-5
1
 0n0-5
0n5
 5n6-5
0n6
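
A Go sketch of monus, the subtraction that stays within the natural numerals 
(a minimal sketch; the function body is mine):

package main

import "fmt"

// monus subtracts b from a but stops at zero rather than
// producing a negative quantity.
func monus(a, b int) int {
    if a < b {
        return 0
    }
    return a - b
}

func main() {
    fmt.Println(monus(0, 3), monus(5, 6), monus(6, 5)) // 0 0 1
}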
 

The notation for numerals is best thought of as the building of compound nouns 
(Komposita) as in German.

Some notational conventions
Bounds on the values expected for a pronom are denoted by mx and Mx

 {mx< & <Mx}x

Notice that {mx< & <Mx} or mx< & <Mx is much different from 
{mx <&< Mx} or mx {<&<} Mx or mx <&< Mx

For binding over a calculated value or compound noun parentheses are needed

 <(3 4 5) 4
0 0 1
 <3 4 5 6
0 0 0
 <3 (4 5 6)
0 0 0
 <(3) 4 5 6
0 0 0
 <(3+4) 4 5 6
1 1 1
 <7 (4 5 6)
1 1 1
 <7 4, 5, 6   / This gives some sense to this system of evaluation
1 1 1


 ? <100 odd! 100
 ? <100 odd.! 100

 ?. <100. odd. !. 100

 (?@<100@odd@!)


  ! " # $ % & ' ( ) * + , - . /
0 1 2 3 4 5 6 7 8 9 : ; < = > ?
@ A B C D E F G H I J K L M N O
P Q R S T U V W X Y Z [ \ ] ^ _
` a b c d e f g h i j k l m n o
p q r s t u v w x y z { | } ~

Classical Compositions
(f g h y) and (f @ g @ h y) give (f (g (h y)))

f
|
g
|
h
|
y

(x f g h y) and (x f @ g @ h y) give (x f (g (h y)))

  f
 / \
x   g
    |
    h
    |
    y

(g h y) and (g @ h y) give (g (h y))

g
|
h
|
y

(x g h y) and (x g @ h y) give (x g (h y))

  g
 / \
x   h
    |
    y

(f0 f1 ... fn g h y) and (f0 @ f1 @ .. @ fn @ g @ h y) give (f0 (f1 ..(g (h 
y))..))

f0
|
f1
|
:
|
fn
|
g
|
h
|
y

(x f0 f1 .. fn g h y) and (x f0 @ .. @ fn @ g @ h y) give (x f0 (f1 ..(g (h 
y))..))

  f0
 / \
x   f1
    |
    :
    |
    fn
    |
    g
    |
    h
    |
    y

Forks
(f g h) is a fork of f`g`h
((f g h) y) gives ((f y) g (h y))

  g
 / \
f   h
|   |
y   y

(x (f g h) y) gives ((x f y) g (x h y))

    g
   / \
  f   h
 /|   |\
x y   x y

Claws
(g h) is a claw of g`h
((g h) y) gives (g (h y))

 g
 |
 h
 |
 y

(x (g h) y) gives (g (x h y))

  g
  |
  h
 / \
x   y

(f ]) and (f [) make f monadic (on either the left or right argument)

Trains
(f0 f1 .. fn g h) is a train of f0`f1`..`fn`g`h
if n is even then the train (f0 f1 .. fn g h) is a fork of forks
if n is even ((f0 f1 .. fn g h) y) gives ((f0 y) f1 (..((fn y) g (h y))..))

f1
| \
f0 f3
|  | \
y  f2 f5
   |  | \
   y  f4 :
      |   \
      y    g
          / \
         fn  h
         |   |
         y   y

if n is even (x (f0 f1 .. fn g h) y) gives ((x f0 y) f1 (..((x fn y) g (x h 
y))..))

  f1
  | \
  f0 f3
 /|  | \
x y  f2 f5
    /|  | \
   x y  f4 :
       /|   \
      x y    g
            / \
           fn  h
          /|   |\
         x y   x y

if n is odd then the train (f0 f1 .. fn g h) is a claw of forks
if n is odd ((f0 f1 .. fn g h) y) gives (f0 ((f1 y) f2 (..((fn y) g (h y))..)))

f0
|
f2
| \
f1 f4
|  | \
y  f3 :
   |   \
   y    g
       / \
      fn  h
      |   |
      y   y

if n is odd "x(f0 f1 .. fn g h)y"gives "(f0 (x f1 (..((x fn y)g(x h y))..)))"

  f0
  |
  f2
  | \
  f1 f4
 /|  | \
x y  f3 :
    /|   \
   x y    g
         / \
        fn  h
       /|   |\
      x y   x y


a b c  # ! 0  A B C
d ` e  1 2 3  D ' E
f g h  4 5 6  F G H
i j k  7 8 9  I J K  
l m n  + - *  L M N
  o    :   ;    O   
p q r  < = >  P Q R
s . t  | ~ &  S , T
u v w  $ _ @  U V W
x y z  % ^ ?  X Y Z
{ ( [  / " \  ] ) }

The notation used here is highly idiomatic and mnemonic.
It is inspired by Iverson's J and APL, and Whitney's k.
Its use as a tool for thinking about math is inspired by Goodstein.
Its use for programming is inspired by Iverson.

There is one order of operations: expressions are evaluated from right to left.
Expressions are read aloud from left to right.

!0 gives the empty or null list ()

Rather than write

   !0     input     antecedent
--------- calculate action
   ()     output    consequent

we write

   !0
()

so that the antecedent event is preceded by some space and the output is not.
This sequence of events is read "enumerate zero gives the null list".
Although a native English speaker might say "Enumerating zero gives the empty 
list".

This sequence of events shows the result of enumerate of a few natural numerals.

   !0
()
   !1
0
   !2
0 1
   !10
0 1 2 3 4 5 6 7 8 9
   !15
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14

We can describe ! in terms of itself using join.

   !10
0 1 2 3 4 5 6 7 8 9
   (!9),9
0 1 2 3 4 5 6 7 8 9

To show how to calculate (!9),9 we separate it into a sequence of intermediate 
events.

       (!9),9          enumerate nine join nine
---------------------  evaluate enumerate nine
(0 1 2 3 4 5 6 7 8),9  join the list 0 1 2 3 4 5 6 7 8 to nine
---------------------  calculate
 0 1 2 3 4 5 6 7 8 9   the vector 0 1 2 3 4 5 6 7 8 9

Notice that there are many different ways of saying the same thing.
Only context can tell us why we might say things one way rather than another.
For example we could have written the previous sequence of events as follows:

      (!9),9
-------------------
0 1 2 3 4 5 6 7 8,9
-------------------
0 1 2 3 4 5 6 7 8 9

The difference is subtle, and it often is, but some contexts benefit from the 
one over the other.
Here is a seemingly complex way of calculating the result of evaluating '!5'

                    !5
----------------------
                (!4),4
----------------------
            ((!3),3),4
----------------------
        (((!2),2),3),4
----------------------
    ((((!1),1),2),3),4
----------------------
(((((!0),0),1),2),3),4
----------------------
  (((((),0),1),2),3),4
----------------------
     ((((0),1),2),3),4
----------------------
       (((0,1),2),3),4
----------------------
       (((0 1),2),3),4
----------------------
         ((0 1,2),3),4
----------------------
         ((0 1 2),3),4
----------------------
           (0 1 2,3),4
----------------------
           (0 1 2 3),4
----------------------
             0 1 2 3,4
----------------------
             0 1 2 3 4

Someone familiar with enumerate of a numeral would probably just write

   !5
0 1 2 3 4

Some people might say "count out five" for !5 or maybe "aufzuzählen fünf".

"Why use ! for enumerate when we already use it to talk about factorials?" you 
ask.
First, you can write */1+!n for the factorial of n.
Second, you can do much more with ! this way than as a factorial (MUCH more).
Second and a half, factorial isn't nearly as simple or general as ! as 
enumerate.
Third, calculating with factorial is difficult.
It occurs frequently, and grows fast.
It is only one of a large family of similar functions.
Pick your favorite binary operation T then T/!n gives you its factorial like 
extension.
In general T/a is read "T over a".
If a:2 4 0 1 12 then (T/a)=2 T 4 T 0 T 1 T 12.
If the idea of right to left evaluation is new you might prefer to think of it 
this way
(T/a) = (2 T(4 T(0 T(1 T 12))))
So if T:+ then (T/a)=+/a and
(+/a) = 2+4+0+1+12
So you might notice a familiar factorial like function with T:+
(+/!5) = 0 + 1 + 2 + 3 + 4
In general, if n is a natural numeral then
(+/!n)=(n-1)*n%2
We can prove this by induction on n.
First, we assume that n is a natural numeral and not something like a function 
or vector.
Notice that (+/!0)=+/() and 0=(0-1)*0%2 so that +/() should give 0 (and it 
does).
See that () looks like 0.
It often plays the same role as 0 when calculating with vectors.
So we know (+/!0)=(0-1)*0%2.
Suppose (+/!n)=(n-1)*n%2 for some numeral n.
By basic arithmetic and algebra the following list of expressions give the same 
numeral:
+/!1+n
+/(!n),n
(+/!n)+n
n+ +/ !n
n+ (n-1)* n%2
((2*n)+(n-1)*n)%2
(2+n-1)*n%2
(1+n)*n%2
(1+n)*(1+n-1)%2
This completes the induction.

Another familiar family of functions are the repeated T functions.
Again, pick your favorite dyad T then T/# gives its repeated extension.
We'll let T:+ in the following sequence of equivalent expressions.
3+/#4
+/3#4
+/4 4 4
4 + 4 + 4
12
Notice that 3+/#4 is the same as 3*4.
This is not surprising since multiplication is often synonymous with repeated 
addition.
We can continue this process to power or exponentiation:
3*/#4
*/3#4
*/4 4 4
4 * 4 * 4
64
The name for repeated exponentiation was created by Goodstein: tetration.
In the following expressions let T:*/#
3T/#4
T/3#4
T/4 4 4
4 */# 4 */# 4
13407807929942597099574024998205846127479365820592393377723561443721764030073546
976801874298166903427690031858186486050853753882811946569946433649006084096
Which is an example of a number that would have given C programmers a small 
headache "back in the day".
The hierarchy of repeated extensions of + (or of succession) is very 
interesting.
As you climb up it's easy to make functions that grow faster than what we tend 
to imagine as being "fast".
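
A Go sketch of the repeated extension T/# (the names are mine; the fold runs 
right to left as N evaluates, though for + and * the direction makes no 
difference):

package main

import "fmt"

// over folds the dyad T across n copies of x: it is T/ n#x ,
// e.g.  3 +/# 4  is  4 + 4 + 4 .
func over(T func(int, int) int, n, x int) int {
    acc := x
    for i := 1; i < n; i++ {
        acc = T(x, acc) // right to left, as N evaluates
    }
    return acc
}

func main() {
    add := func(a, b int) int { return a + b }
    mul := func(a, b int) int { return a * b }

    fmt.Println(over(add, 3, 4)) // 3+/#4 gives 12, i.e. 3*4
    fmt.Println(over(mul, 3, 4)) // 3*/#4 gives 64, i.e. a power
    // The next rung, T:*/#, is tetration; its values outgrow a
    // machine int almost immediately, so it is left out here.
}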

The number of items in a list is returned by count:
n=#!n
4=#4 4 4 4
(#1 2 3 4)=#0 1 2 3

Why use such a strange notation to say what we already say with regular math 
notation?
These notational conventions reveal similarities between operations that are 
often defined using wildly different notation or notions.
As has already been shown, any operation that is born from repetition is given 
as T/#.
There is more to this notation if we consider binary operations as parametrized 
unary operations (this is in the spirit of currying, but has been surprisingly 
familiar to accountants around the world for a long while).
Suppose you ask an accountant to add the following list of numerals:
12 23 424 244 3535 54
If their calculator wasn't already cleared they would clear it and start by 
typing 54 then + which would add 54 to 0 since clearing a calculator starts its 
value at 0.
Then they would type 3535+ followed by 244+, 424+, 23+, and 12+ finally 
reaching the sum 4292.
We summarize these steps by writing:
12+ 23+ 424+ 244+ 3535+ 54+ 0
Thus it is revealed that 54+ is an operation called "add 54 to" or "54 plus".
It is one of a wide family of operations e.g. 12+ "add 12 to" or 23+ "add 23 to"
Now, the expression
12+ 23+ 424+ 244+ 3535+ 54+ 0
becomes the successive composition of parametrized unary operators applied to 0.
Grouping with parenthesis will separate out the actions from the noun:
(12+ 23+ 424+ 244+ 3535+ 54+) 0
It is this simple example which suggests a strong and far reaching 
generalization: use space, or a lack thereof, to group and build and denote 
different compositions of actions.
One might write
(-12 12+ 8- +23) 0
for what is traditionally written (and should/still can be written conveniently 
as)
(12+8-23+0)-12
But, in the second case, we have removed any sense of successive actions.
There is also the issue of "ambiguity" as to whether -12 is an operation "minus 
twelve" or denotes a quantity "negative twelve".
The resolution of this ambiguity is found in the origin of negative quantities 
as a tool for recording debits and credits, or gains and losses.
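
The accountant's keystrokes are closures; a Go sketch of these parametrized 
unary operations (the names are mine):

package main

import "fmt"

// plus returns the parametrized unary operation "add n to".
func plus(n int) func(int) int {
    return func(x int) int { return n + x }
}

func main() {
    // (12+ 23+ 424+ 244+ 3535+ 54+) 0 : successive actions applied to 0
    acts := []func(int) int{
        plus(54), plus(3535), plus(244), plus(424), plus(23), plus(12),
    }
    total := 0
    for _, act := range acts { // rightmost action first
        total = act(total)
    }
    fmt.Println(total) // 4292
}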

Historically, the natural numerals 0 1 2 3 4 5 and so on were the only 
quantities worth considering as "actual", "real", or "existent" quantities.
(though it took some time for 0 to be thought of as denoting a quantity which 
could be "measured" as we tend to use the word today)
The idea which led to today's conception of "negative numbers" is found in 
recording gains and losses using natural numerals.
A gain of 3 and a loss of 4 would be interpreted as a net loss of 1.
The notion of "net loss" or "net gain" is born from the canonicalization of a 
table of gains and losses.
These canonical "net gains" or "net losses" are what developed into the modern 
mathematician's positive and negative numbers i.e. integers.

We consider a single trading event which is recorded as a quantity of gains and 
a quantity of losses.
A gain of 3 and loss of 4 is denoted 3n4
A gain of 5 and loss of 25 is denoted 5n25
A gain of 0 and loss of 15 is denoted 0n15
A gain of 3 and loss of 0 is denoted 3n0 or, simply, 3.
Thus the unary operation "negate", which is denoted either 0- or -: works as 
follows
   -:3
0n3
   0- 3
0n3
   0 - 3
0n3
   - 3
0n3
   -3
-3
In the last case -3 is a unary operation "minus three" whereas - 3 is "negate
3".
Notice that "negate three" is a two word command and the verb "negate" is 
separated from the noun "three" by a space just as in - 3.
The convention that -:3 represents "negate 3" is from :'s use as an adverb when 
put right next to a verb.
There is much more that can be said about that later.
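
Here is a hedged sketch in Go of this record-keeping view (the type GL and
its methods are my invention): a record is a gain paired with a loss, and
negation simply swaps the two columns.

package main

import "fmt"

// GL records a single trading event as a gain and a loss.
type GL struct{ Gain, Loss uint }

// Canon reduces a record to its canonical net form, e.g. 3n4 becomes 0n1.
func (a GL) Canon() GL {
    if a.Gain >= a.Loss {
        return GL{a.Gain - a.Loss, 0}
    }
    return GL{0, a.Loss - a.Gain}
}

// Neg swaps gains with losses, so negating 3 (i.e. 3n0) gives 0n3.
func (a GL) Neg() GL { return GL{a.Loss, a.Gain} }

func (a GL) String() string { return fmt.Sprintf("%dn%d", a.Gain, a.Loss) }

func main() {
    fmt.Println(GL{3, 4}.Canon()) // 0n1: a gain of 3 and a loss of 4 nets a loss of 1
    fmt.Println(GL{3, 0}.Neg())   // 0n3
}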

For those who object outright to the use of 0n3 to denote what is commonly
referred to as "negative three" I will suggest a bit of patience, as many
numbers are the result of this same general pattern, and we should consider
the origins of such concepts as not only historically interesting but also
interesting from a pedagogical or behavioral perspective.

Consider fractions.
Given a pie we are asked how to share it among four children.
Certainly each child wishes to have part of the pie, and no one child would 
wish their part to be smaller than any other's part.
Since there are four children, the whole pie must be broken into four parts.
Since no child wishes to have less than another, the parts must be equal.
Finally, we give each child one of the four parts of the pie.
From this simple, and commonly occurring event, we get numbers for sharing i.e. 
fractions.
The one part each child receives from the whole pie is said to be "one
fourth" of the pie.

Suppose a party is planned and eight children are to be there.
The pie is cut into eight parts, each being equal to the other in order to 
eliminate any sense of "unfairness" among the children.
Due to inclement weather, only four children end up at the party.
These children, still wishing to have their fair share of pie, are given two 
out of the eight parts of the whole pie.
Thus, each child is said to have received "two eighths" of the pie.

The realization that "one fourth" of a pie is the same amount of pie as "two
eighths" of a pie comes from comparing the relative sizes of the resulting
collection of pie pieces.
Here "one of four parts" is denoted by 1f4 (though, ultimately the use of 'f' 
is inconsequential as long as it is easily distinguished from the other numeral 
notation introduced here).

The question "is one of four parts the same as two of eight parts" is asking 
whether two methods of dividing a whole into parts and then selecting some of 
those parts give the same quantity.
Here, the single trading event is an act of breaking a whole into a number of
equal parts and selecting a number of those parts.
We record such an event by making a table of the number of equal parts the 
whole is broken into and the number of parts selected.
So 1f2 records an event where some whole is broken into two equal parts and one 
of those parts is retained.
Similarly 2f4 records an event where some whole was split into four equal parts 
and two of those parts were kept.
The fact that a record of 1f2 is similar to a record of 2f4 (given the same 
whole) is what leads to the modern mathematician's notion of rational numbers.
The use of canonical records (e.g. 1f2 for 2f4 or 16f32) is what most people 
think of as "rational numbers" rather than the companion concept of "fractions".
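
A sketch of the same idea in Go (F, Canon, and gcd are my names): an f-record
keeps the number of parts selected and the number of equal parts, and the
canonical record divides out their common measure.

package main

import "fmt"

// F records breaking a whole into Of equal parts and selecting Sel of them.
type F struct{ Sel, Of uint }

// gcd is the greatest common divisor, the largest common measure.
func gcd(a, b uint) uint {
    for b != 0 {
        a, b = b, a%b
    }
    return a
}

// Canon gives the canonical record, e.g. 2f4 and 16f32 both give 1f2.
func (x F) Canon() F {
    d := gcd(x.Sel, x.Of)
    return F{x.Sel / d, x.Of / d}
}

func (x F) String() string { return fmt.Sprintf("%df%d", x.Sel, x.Of) }

func main() {
    fmt.Println(F{2, 4}.Canon(), F{16, 32}.Canon()) // 1f2 1f2
}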

What has been revealed thus far is a bit misleading.
Certainly, there is reason to have notation for "negative numbers" or "rational 
numbers" but is it necessary to separate -:3 from 0n3?
One could just as easily write 0-3 for 0n3 since 0n3 is the result of 
evaluating 0-3.
In general k-(3+k) gives 0n3 for any numeral k.
Ultimately, it is a matter of choice which notation is used in which
situation, but it is important to know that 0n3 is a noun, a record of an
event, whereas 0-3 is a command which says "subtract 3 from 0", and in some
contexts the one is much preferable to the other.

Similar statements apply to fractions when division is introduced.
Here 2%4 gives 1f2 which is the same as the result of 1%2 or 16%32.

What about negative fractions?
   -:2%4
0n1f2
Some people might find such numbers mixed with letters frightening, and that's 
okay.
One thing that this reveals though is a very important thing about fractional 
numerals: they are really triples of natural numerals.
The quantity denoted by 3n4f2 is equivalent to 0n1f2 and not 3n2f1.

One can also use (-2%3) to stand for 0n2f3, but the parentheses matter in
this case since - is only interpreted as "negate" because there is a
parenthesis to its left and evaluation always proceeds from right to left.

-2%3
-----
-2f3
-----
0n2f3

Notice that no parentheses are used here.
That is because there is nothing else that the expression -2%3 is part of so it 
is easily known that - means "negate" in this context.

In the notation used here the result of evaluating (-2%3) is different from the 
result of evaluating (-2 %3).
As has been said (-2%3) gives 0n2f3 but

 -2 1%3
-------
 -2 1f3
-------
 1f3-2
-------
1f3-6f3
-------
 0n5f3

There is certainly good reason for perhaps interpreting -2 %3 in a different 
way.
Some might wish for -2 %3 to give a vector 0n2 1f3.
For them there are at least two choices: (-2) (%3) or -:2 %:3.

Having gotten this far you might wonder if it's really all worth it.
Math seems to be doing fine without worrying about this unfamiliar notation 
that seems to say only what has already been said about math.
The "superficial" point of difference of this notation is that mathematical 
expressions built with this notation are immediately computable so that 
experimentation is as simple as with classical calculators only much more is 
expressible with only a minimal amount of new notation.

The true value, purpose, or relevance of this notation is hard to communicate 
because its fundamental point of difference either goes unseen by most 
mathematicians/computer scientists or is outright denied as being a "relevant" 
point of difference.
First, it promotes a finite perspective on mathematics.
Some use the words "constructive" or "intuitionistic", but neither is fully 
meant here.
A consequence of following this notational discipline is that the mathematics 
done is constructive, as far as "being constructive" has a well defined meaning 
in the first place.
Furthermore, results are "intuitionistic" in that they will not be beyond
what can be expressed in a mathematics built on an intuitionistic logic.
But in both cases, neither constructivity nor intuitionistic constraints are 
what is aimed at.

As a slogan, one might say, as Leopold Kronecker did, that "God made the 
integers; all else is the work of man", but I would rather say that humans 
created numerals and all else is the work of humans.
Perhaps another slogan is Feynman's (or Dirac's or Mermin's) "Shut up and
calculate", though their use of the phrase would permit mathematical tools
which are denied here (specifically tools from non-recursive analysis).

Some Collected Drafts
a<|=b       a less or equal b
*/1+!n      times over one plus enum n (factorial of n)
+/%#a       plus over quotient num of a (average of a)
+/a*x^!#a   polynomial in x with coefficients a (p:{+/x*y^!#x} so that (1 3 5 7)p is a polynomial operator)
(+/!1+n)=n*(1+n)%2
(!n m)=(!n);n+!m
(x^y)=*/x#y
(*/x#y)=x*/#y claws can be very helpful
(+/a*x^!#a)=+/a**/x#"!#a suggest dyadic ^ is suspect like ! as factorial.
(+/a*x^!#a)=*/b-x for some b with (#b)=#a (fundamental theorem of algebra) (add awesome generalizations that simplify)

e:*/#         /power i.e. x e y is "x to the y"
r:{x*/+(!y)}  /raising factorial power i.e. 'x r y' is "x to the y rising"
f:{*/!1+x}    /factorial of
a:            /finite sequence of numerals
b:            /finite sequence of numerals
F:{+/(a r\: * y e % b r\: * f)!x}
k F           /k-th partial sum of a,b-hypergeometric operator

+/(a (*/+)\: * z (*/#) % b (*/+)\: * (*/!:1+))!k  /k-th partial sum of a,b-hypergeometric function at z

The following draft material is written using an older form of N notation.
A language grows, it doesn't just blip into existence overnight.

Notes on Basic Math by Serge Lang

Contents

Part I Algebra

Chapter 1 Numbers
The integers
Rules for addition
Rules for multiplication
Even and odd integers; divisibility
Rational numbers
Multiplicative inverses

Chapter 2 Linear Equations
Equations in two unknowns
Equations in three unknowns

Chapter 3 Real Numbers
Addition and multiplication
Real numbers: positivity
Powers and roots
Inequalities

Chapter 4 Quadratic Equations

Interlude On Logic and Mathematical Expressions
On reading books
Logic
Sets and elements
Notation

Part II Intuitive Geometry

Chapter 5 Distance and Angles
Distance
Angles
The Pythagorean theorem

Chapter 6 Isometries
Some standard mappings of the plane
Isometries
Composition of Isometries
Congruences

Chapter 7 Area and Application
Area of a disc of radius r
Circumference of a circle of radius r

Part III Coordinate Geometry

Chapter 8 Coordinates and Geometry
Coordinate systems
Distance between points
Equations of a circle
Rational points on a circle

Chapter 9 Operations on Points
Dilations and reflections
Addition, subtraction, and the parallelogram law

Chapter 10 Segments, Rays, and Lines
Segments
Rays
Lines
Ordinary equation for a line

Chapter 11 Trigonometry
Radian measure
Sine and cosine
The graphs
The tangent
Addition Formulas
Rotations

Chapter 12 Some Analytic Geometry
The straight line again
The parabola
The ellipse
The hyperbola
Rotation of hyperbolas

Part IV Miscellaneous

Chapter 13 Functions
Definition of a function
Polynomial functions
Graphs of functions
Exponential function
Logarithms

Chapter 14 Mappings
Definition
Formalism of mappings
Permutations

Chapter 15 Complex Numbers
The complex plane
Polar form

Chapter 16 Induction and Summations
Induction
Summation
Geometric Series

Chapter 17 Determinants
Matrices
Determinants of order 2
Properties of 2 by 2 determinants
Determinants of order 3
Properties of 3 by 3 determinants
Cramer's Rule

Numbers

The Integers
(Z. *. 0 <) n means n is a positive integer e.g. 1 2 3 4 5 6 7 8 9 10 11
0 = n means n is zero
N. n means n is a natural number i.e. zero or positive integer
natural number line with origin labeled 0
(Z. *. 0 >) n means n is a negative integer e.g. _1 _2 _3 _4 _5 _6 ..
Z. n means n is an integer (zero, positive integer, negative integer)
integer number line as iterated measurement from 0
addition as iterated motion on the number line
(Z. n) implies (n = n + 0) and n = 0 + n
n - ~ as (- n) +   subtraction as adding a negative
(Z. n) implies (0 = n + - n) and 0 = (- n) + n
n and - n are on opposite sides of 0 on the standard number line
read - n as "minus n" or "the additive inverse of n"

Rules For Addition
(n + m) = m + n                   commutative
((n + m) + k)=n + m + k           associative
0 = n + - n                       right inverse
0 = (- n) + n                     left inverse
n = - - n                         involution (double negation)
(- n + m) = (- n) - m             negation distributes over addition
(*. / 0 < n) implies 0 < + / n    positive additivity
(*. / 0 > n) implies 0 > + / n    negative additivity
(n = m + k) implies m = n - k     left solvable
(n = m + k) implies k = n - m     right solvable
((n + m) = n + k) implies m = k   cancellation rule
(n = n + m) implies m = 0         unique right identity
(n = m + n) implies m = 0         unique left identity

Rules For Multiplication
(n * m) = m * n                   commutative
((n * m) * k) = n * m * k         associative
n = 1 * n                         identity
0 = 0 * n                         annihilator
(n * (m + k)) = (n * m) + n * k   left-distributive
((n + m) * k) = (n * k) + m * k   right-distributive
(- n) = _1 * n                    minus is multiplication by negative one
(- n * m)=(- n) * m               minus permutes over multiplication
(- n * m) = n * - m               minus permutes over multiplication
(n * m) = (- n) * - m
(n ^ k) = * / k #: n              exponentiation is iterated multiplication
(n ^ m + k) = (n ^ m) * n ^ k
(* / n ^ m) = n ^ + / m
(n ^ m ^ k) = n ^ m * k
(n ^ * / m) = ^ / n , m
((n + m) ^ 2) = (n ^ 2) + (2 * n * m) + m ^ 2
(*: n + m) = (*: n) + (+: n * m) + *: m
((n - m) ^ 2) = (n ^ 2) - (2 * n * m) + m ^ 2
(*: n - m) = (*: n) - (+: n * m) + *: m
((n + m) * n - m) = (n ^ 2) - m ^ 2
((n + m) * n - m) =(*: n) - *: m
n ((+ * -) = (*: [) - (*: ])) m

Even And Odd Integers; Divisibility
odd integers: 1 3 5 7 9 11 13 ..
even integers: 2 4 6 8 10 12 14 ..
'n is even' means n = 2 * m for some m with Z. m
'n is odd' means n = 1 + 2 * m for some m with Z. m
if E means even and I means odd then
 E = E + E and E = I + I
 I = E + I and I = I + E
 E = E * E and I = I * I
 E = I * E and E = E * I
 E = E ^ 2 and I = I ^ 2
 1 = _1 ^ E and _1 = _1 ^ I
n (-. |) m means "n divides m" if m = n * k for some integer k
n (-. |) n and 1 (-. |) n
"a is congruent to b modulo d" if a - b is divisible by d
if (a - b) | d and (x - y) | d then ((a + x) - b + y) | d 
if (a - b) | d and (x - y) | d then ((a * x) - b * y) | d

Rational Numbers
fractions: mrn with m , n integer numerals and -. n = 0 e.g. 0r1 _2r3 3r4 ...
dividing by zero does not give meaningful information
rational number line
(m % n) = s % t if *. / (-. 0 = n , t) , (m * t) = n * s
m = m % 1
(-. 0 = a , n) implies (m % n) = (a * m) % a * n  cancellation rule
(- m % n) = (- m) % n
(- m % n) = m % - n
(*. / (Q. r) , 0 < r) iff *. / (r = n % m) , (Z. , 0 <) n , m
"d is a common divisor of a and b" if d divides both a and b
the lowest form of a is mrn where 1 is the only common divisor of m and n
every positive rational has a lowest form
if -. n = 1 and the only common divisor of m and n is 1 then mrn = m % n
((a % d) + b % d) = (a + b) % d
((m % n) + a % b) = ((m * b) + a * n) % n * b
(0 = 0 % 1) and 0 = 0 % n
(a = 0 + a) and a = a + 0
negative rational numbers have the form _mrn
_mrn = - mrn and mrn = - _mrn
rational addition is commutative and associative
((m % n) * a % b) = (m * a) % n * b
((m % n) ^ k) = (m ^ k) % n ^ k
(Q. r) <: -. 2 = r ^ 2
a real number that is not rational is called irrational
rational * is associative, commutative, and distributes over +
(Q. r) <: (a = 1 * a) *. 0 = 0 * a
! = (* / 1 + i.) i.e. (! n) = 1 * 2 * 3 * ... * n
! = ] * (! <:) i.e. (! 1 + n) = (1 + n) * ! n
(n ! m) = (! n + m) % (! n) * ! m   binomial coefficients
(n ! m) = ((! + /) % (* / !)) n , m   multinomial coefficients
(n ! m) = m ! n
(n ! m + 1) = (n ! m) + (n - 1) ! m + 1
decimals

Multiplicative Inverses
(*. / (Q. a) , -. a = 0) implies *. / (Q. b) , 1 = a * b
"b is a multiplicative inverse of a" if *. / 1 = a (* ~ , *) b
(b = c) if *. / (-. 0 = a) , 1 = (a * b) , (b * a) , (a * c) , c * a
(-. 0 = a) implies *. / (1 = a * % a) , 1 = (% a) * a
(-. 0 = a =: n % m) implies *. / ((% a) = m % n) , (% a) = (n % m) ^ _1
(1 = a * b) implies b = a ^ _1
(0 = a * b) implies +. / 0 = a , b
((a % b) = c % d) if *. / (-. 0 = b , d) , (a * d) = b * c
(b = c) if *. / (-. 0 = a) , (a * b) = a * c   times cancellation law
(*. / -. 0 = b , c) implies ((a * b) % a * c) = b % c  quotient cancellation law
((a % b) + c % d) = ((a * d) + b * c) % b * d
(((x ^ n) - 1) % x - 1) = (x ^ n - 1) + (x ^ n - 2) + ... + x + 1
if n is odd then (((x ^ n) + 1) % x + 1) = - ` + / x ^ n - 1 + i. n

Linear Equations

Equations In Two Unknowns
assuming c = (a * x) + b * y and u = (v * x) + w * y yields
 x = ((w * c) - u * b) % (w * a) - v * b
 y = ((v * c) - a * u) % (v * b) - w * a
elimination method: common multiples
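
A sketch of the closed form in Go (solve2 is my name; it assumes the
determinant (w * a) - v * b is nonzero):

package main

import "fmt"

// solve2 returns x and y satisfying c = (a * x) + b * y and u = (v * x) + w * y.
func solve2(a, b, c, v, w, u float64) (x, y float64) {
    x = (w*c - u*b) / (w*a - v*b)
    y = (v*c - a*u) / (v*b - w*a)
    return
}

func main() {
    // 1 = x + y and 5 = (2 * x) - y gives x = 2, y = -1
    x, y := solve2(1, 1, 1, 2, -1, 5)
    fmt.Println(x, y) // 2 -1
}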

Equations In Three Unknowns
iterate elimination method

Real Numbers

Addition And Multiplication
the real number line
addition of real numbers is commutative, associative, a = 0 + a , 0 = a + - a
(0 = a + b) implies b = - a  unique additive inverse
* is commutative,associative,distributes over +, a = 1 * a, 0 = 0 * a
((a + b) ^ 2) = (a ^ 2) + (2 * a * b) + b ^ 2
((a - b) ^ 2) = (a ^ 2) - (2 * a * b) + b ^ 2
((a + b) * a - b) = (a ^ 2) - b ^ 2
every nonzero real number has a unique multiplicative inverse
the E , I system satisfies the addition and multiplication properties

Real Numbers: Positivity
positivity as being on a side of 0 on the number line
a > 0 means "a is positive"
(*. / 0 < a , b) implies *. / 0 < (a * b) , a + b
(*. / 0 < a) implies (*. / 0 < * / , + /) a
~: / (0 = a) , (0 < a) , 0 > - a
a < 0 means *. / (-. 0 = a) , (- a) > 0
"a is negative" means a<0
(a < 0) iff 0 < - a
(0 < 1) and 0 > _1
every positive integer is positive
(0 > a * b) if (0 < a) and 0 > b
(0 > a * b) if (0 > a) and 0 < b
(0 < a) implies 0 < 1 % a
(0 > a) implies 0 > 1 % a
assume completeness: (a > 0) implies *. / (0 < %: a) , a = (%: a) ^ 2
"the square root of a" means %: a
an irrational number is a real number that is not rational e.g. %: 2
Assuming *. / a = *: b , x yields
 0 = - / *: b , x
 0 = x (+ * -) b
 +. / 0 = x (+ , -) b
 +. / x = (- , ]) b
((x ^ 2) = y ^ 2) implies (x = y) or x = - y
(| x) = %: *: x  absolute value
(% (%: x + h) + %: x) = ((%: x + h) - %: x) % h  rationalize 
0 < a ^ 2
(%: a % b) = (%: a) % %: b alternatively ((%: % /) = (% / %:)) a , b
(*. / (Q. x , y , z , w) , (N. *. 0 <) n) implies (
*. / (Q. c , d) , (c + (d * %: n)) = (x + y * %: n) * z + w * %: n)
(| a - b) = | b - a

Powers And Roots
assume *. / (0 < a) , (N. , 0 < ) n implies a = (n %: a) ^ n for a unique 
n %: a
"the nth-root of a" means n %: a
(a ^ 1 % n) = n %: a
(0 < a , b) implies ((n %: a) * n %: b) = n %: a * b
fractional powers: *. / (Q. x) , 0 < a implies there exists a ^ x such that
((a ^ x) = a ^ n) if x = n
((a ^ x) = n %: a) if x = 1 % n
(a ^ x + y) = (a ^ x) * a ^ y
(a ^ x * y) = (a ^ x) ^ y
((a * b) ^ x) = (a ^ x) * b ^ x
*. / (1 = a ^ 0) , 1 = * / #: 0
(a ^ - x) = 1 % a ^ x
(a ^ m % n) = (a ^ m) ^ 1 % n
(a ^ m % n) = (a ^ 1 % n) ^ m

Inequalities
a < b means 0 < b - a
a < 0 means 0 < - a
a < b means b > a
inequalities on the numberline
a <: b means a < b or a = b
a >: b means a > b or a = b
(*. / (a < b) , b < c) implies a < c
(*. / (a < b) , 0 < c) implies (a * c) < b * c
(*. / (a < b) , c < 0) implies (b * c) < a * c
x is in the open interval a , b if (a < *. b >) x
x is in the closed interval a,b if (a <: *. b >:) x
x is in a clopen interval a,b if +. / ((a < *. b >:) , (a <: *. b >)) x
(a <),(a <:) , (a >) , a >:  infinite intervals
intervals and the numberline
(*. / (0 < a) , (a < b) , (0 < c) , c < d) implies (a * c) < b * d
(*. / (a < b) , (b < 0) , (c < d) , d < 0) implies (a * c) > b * d
(*. / (0 < x) , x < y) implies (1 % y) < 1 % x
(*. / (0 < b) , (0 < d) , (a % b) < c % d) implies (a * d) < b * c
(a < b) implies ((a + c) < b + c) and (a - c) < b - c
(*. / (0 < a) , a < b) implies (a ^ n) < b ^ n
(*. / (0 < a) , a < b) implies (a ^ 1 % n) < b ^ 1 % n
(*. / (0 < b , d) , (a % b) < c % d) implies ((a % b) < (a + c) % b + d)
(*. / (0 < b , d) , (a % b) < c % d) implies ((a + c) % b + d) < c % d
(*. / (0 < b , d , r) , (a % b) < c % d) implies (
 (a % b) < (a + r * c) % b + r * d)
(*. / (0 < b , d , r) , (a % b) < c % d) implies (
 ((a + r * c) % b + r * d) < c % d)
(*. / (0 < b , d , r) , (r < s) , (a % b) < c % d) implies (
((a + r * c) % b + r * d) < (a + s * c) % b + s * d)

Quadratic Equations
((*. / 
 (-. a = 0) , 
 (0 = (a * x ^ 2) + (b * x) + c) , 
 (0 <: (b ^ 2) - 4 * a * c)) 
implies
+. / 
 (x = (- b + %: (b ^ 2) - 4 * a * c) % 2 * a) , 
 (x = (- b - %: (b ^ 2) - 4 * a * c) % 2 * a))
(0 > (b ^ 2) - 4 * a * c) implies -. *. / (R. x) , 0 = (a * x ^ 2) + (b * x) + c
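
As a sketch in Go (roots is my name; it assumes -. a = 0 and a nonnegative
discriminant, per the conditions above):

package main

import (
    "fmt"
    "math"
)

// roots returns both solutions of 0 = (a * x ^ 2) + (b * x) + c.
func roots(a, b, c float64) (float64, float64) {
    d := math.Sqrt(b*b - 4*a*c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)
}

func main() {
    fmt.Println(roots(1, -3, 2)) // 2 1
}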

On Logic And Mathematical Expressions

Logic
proof as list of statements each either assumed or derived from a deduction rule
converse: the converse of "if A, then B" is "if B, then A"
"A iff B" means "if A, then B" and "if B, then A"
proof by contradiction: take A false, derive a contradiction, conclude A true
equations are not complete sentences
logical equivalence as A iff B

Sets And Elements
set: a collection of objects
element: an object in a set
subset: s0 is a subset of s1 if every element of s0 is an element of s1
empty set: a set that does not have any elements
set equality: s0 equals s1 if s0 is a subset of s1 and s1 is a subset of s0.

Indices
"let x,y be something" includes the possibility that x=y
"let x,y be distinct somethings" excludes the possibility that x=y
x0 x1 x2 x3 .. xn is a finite sequence

Distance And Angles

Distances
assume p0 d p1 gives the distance between the points p0 , p1
assume that for any points p0,p1,p2
0 <: p0 d p1   nonnegative
(0 = p0 d p1) iff p0 = p1   nondegenerate
(p0 d p1) = p1 d p0   symmetric
(p0 d p1) <: (p0 d p2) + p2 d p1   triangle inequality
note the geometric meaning of the triangle inequality
the length of a side of a triangle is at most the sum of the others
assume that two distinct points lie on one and only one line
 (-. p0 = p1) implies *. / (p0 p1 i p0 , p1),
 (*. / p2 p3 i p0 , p1) implies p2 p3 i = p0 p1 i
define betweenness as equality case of the triangle inequality
 (p0 p1 B p2) iff (p0 d p1) = (p0 d p2) + p1 d p2
define segment as the points between a pair of endpoints
 (p0 p1 W p2) iff p0 p1 B p2  (by definition of B we have p0 p1 i p2)
assume the length of a segment is the distance between its endpoints
 (mW p0 p1) = p0 d p1
assume rulers pick out unique points
 (*./(0<:a),a<:p0 d p1) implies *./(p0 p1 W p2),a=p0 d p2 for some p2
 ((*./(p0 p1 W),(= p0 d))p2,p3) implies p2=p3
define circle as the points equidistant from a common point
 (p0 p1 o p2) if (p0 d p1)=p0 d p2  geometric circle from metric
define (p0 r bdB) as the circle with center p0 and radius r
 (p0 r bdB p1) if r=p0 d p1  metric circle as boundary of a ball
prove two points uniquely define a circle
 (p0 p1 o p2) implies (p0 p1 o = p0 p2)
prove a point and radius uniquely define a circle
 (p0 r bdB p1) implies (p0 r bdB p2) iff p0 p1 o p2
define (p0 r clB p1) as the disc with center p0 and radius r
 (p0 r clB p1) if r>:p0 d p1

Angles
assume distinct points lie on a unique line
 (-.p0=p1) implies *./(p0 p1 i p0,p1),
 (*./p2 p3 i p0,p1) implies (p2 p3 i = p0 p1 i)
assume a pair of nonparallel lines share a unique point
 (-.p0 p1 p2 H p3) implies (p0 p1 i *. p2 p3 i)p4 for some p4
 (*./(-.p0 p1 p2 H p3),(p0 p1 i *. p2 p3 i)p4,p5) implies p4=p5
assume a point belongs to a unique parallel to a line
 p0 p1 p2 H p2
 (*./(p0 p1 p2 H p3),p2 p3 i p4) implies p0 p1 p2 H p4
 (*./p0 p1 p2 H p3,p4) implies (p2 p3 i = p2 p4 i)
assume "parallel to" is an equivalence relation
 p0 p1 p0 H p1
 (p0 p1 p2 H p3) implies p2 p3 p0 H p1
 (*./(p0 p1 p2 H p3),p0 p1 p4 H p5) implies p2 p3 p4 H p5
assume a point belongs to a unique perpendicular to a line
 (*./(p0 p1 p2 L p3),p2 p3 i p4) implies p0 p1 p2 L p4
 (*./p0 p1 p2 L p3,p4) implies (p2 p3 i = p2 p4 i)
assume a parallel to a perpendicular is perpendicular
 (*./(p0 p1 p2 L p3),p2 p3 p4 H p5) implies p0 p1 p4 L p5
assume a perpendicular to a perpendicular is parallel
 (*./(p0 p1 p2 L p3),p2 p3 p4 L p5) implies p0 p1 p4 H p5
define a halfline as points on the same side of a line relative to a vertex
 (p0 p1 R p2) if (p2 B p0 p1)+.p1 B p0 p2
assume a halfline is determined by its vertex and any other point on it
 ((p0 p1 R p2)*.-.p0=p2) implies p0 p1 R = p0 p2 R
define (p0 p1 R) as the halfline with vertex p0 to which p1 is incident
assume a pair of distinct points determine two distinct rays
 (-.p0=p1)<:p0 p1 R (-.=) p1 p0 R
assume a point on a line divides it into two distinct halflines
 (p0 p1 i p2)<: (p0 p1 R p2)+.(p0 p1 i p3) implies (p0 p1 R p3)+.p0 p2 R p3
assume two distinct halflines sharing a vertex separate the plane into two parts
define angle as one of the parts of the plane separated by such halflines
assume two points on a circle divide it into two distinct arcs
note Lang uses counterclockwise oriented angles rather than neutral angles
assume p0 p1 p2 c is the counterclockwise arc of (p1 p0 o) from p0 to (p1 p2 R)
define (p0 p1 p2 V) as the angle from p1 p0 R to p1 p2 R containing p0 p1 p2 c
define the vertex of (p0 p1 p2 V) as p1
define (p0 p1 p2 V) is a zero angle as (p1 p0 R = p1 p2 R)
define (p0 p1 p2 V) is a full angle as (p2 p1 p0 V) is a zero angle
note special notation to distinguish a full angle from a zero angle
define (p0 p1 p2 V) is a straight angle as (p0 p1 i p2)
prove if (p0 p1 p2 V) is a straight angle then so is (p2 p1 p0 V)
define (p0 p1 p2 r clBV) as the sector of (p1 r clB) determined by (p0 p1 p2 V)
 (p0 p1 p2 r clBV p3) if (p1 r clB p3)*.(p0 p1 p2 V p3)
define mclB p0 r as the measure of the area of (p0 r clB)
define mclBV p0 p1 p2 r as the measure of the area of (p0 p1 p2 r clBV)
define (mV p0 p1 p2) using the ratio (mclBV p0 p1 p2 r) to mclB p1 r
 (mV p0 p1 p2)=x deg if *./(0<:x),(x<:360),((mclBV p0 p1 p2 r)%mclB p1 r)=x%360
define "x deg" as "x degrees"
prove the measure of a full angle is 360 deg
 (p0 p1 R p2) implies (360 deg)= mV p2 p1 p0
prove the measure of a zero angle is 0 deg
prove the measure of a straight angle is 180 deg
define a right angle as one whose measure is half a straight angle i.e. 90 deg
 (p0 p1 p2 V) is right iff 90=mV p0 p1 p2
assume the area of a disc of radius r is pi*r^2 where pi is near 3.14159
prove that the measure of an angle is independent of r

Pythagorean Theorem
define p W p0 as +. / 2 (p0 W ~) \ p
define noncolinear points p0,p1,p2 as -. p0 p1 i p2
define triangle as segments between three points
 (p0 p1 p2 A p3) if p0 p1 p2 p0 W p3
define the triangle with vertices p0 , p1 , p2 as (p0 p1 p2 A)
define the sides of (p0 p1 p2 A) as (p0 p1 W), (p1 p2 W), and (p2 p0 W)
define triangular region as the points bounded by and having a triangle
define area of a triangle as area of a triangular region
define mA p0 p1 p2 as the measure of the area of (p0 p1 p2 A)
note triangular regions are also called simplexes
note pairs of sides of a triangle determine angles
define a right triangle as one having a right angle
 (p0 p1 p2 Z p3) if *./ (p0 p1 p2 A p3) , 90 = mV p1 p2 p0
define the legs of a right triangle as the sides of its right angle
define the hypotenuse of a right triangle as the non-leg side
assume right triangles with corresponding legs of equal length are congruent
 (*./(p0 p1 p2 Z),(p3 p4 p5 Z),((p1 d p2)=p4 d p5),(p2 d p0)=p5 d p3) implies
 *./((mV p0 p1 p2)=mV p3 p4 p5),((mV p1 p0 p2)=mV p4 p3 p5),
 ((p0 d p1)=p3 d p4),(mA p0 p1 p2)=mA p3 p4 p5
assume parallels perpendicular to parallels cut corresponding segments equally
 (*./(p0 p1 p2 H p3),(p0 p1 p0 L p2),p0 p1 p1 L p3) implies 
 *./((p0 d p1)=p2 d p3),(p1 d p2)= p3 d p0
define (0=mH p0 p1 p2 p3) if -.(p0 p1 p2 H p3)
define ((p0 d p1)=mH p2 p0 p3 p1) if p2 p0 p3 H p1
prove the distance between parallel lines is unique
(*./(p0 p1 p2 H p3, p4)(p2 p3 p3 L p5)(p0 p1 i p5,p6)p2 p4 p4 L p6)<:(p3 d p5)=p4 d p6
define rectangle as four sides: opposites parallel and adjacents perpendicular
 (p0 p1 p2 p3 Z p4) if 
 *. / (p0 p1 p2 H p3) , (p1 p2 p3 H p0) ,
 (p0 p1 p1 L p2) , (p1 p2 p2 L p3) , (p2 p3 p3 L p0) , (p3 p0 p0 L p1) ,
 p0 p1 p2 p3 p0 W p4
define (p0 p1 p2 p3 Z) as a rectangle with vertices p0 p1 p2 p3
prove the opposite sides of a rectangle have the same length
note area of a rectangle means area of region bounded and containing a rectangle
define (mZ p0 p1 p2 p3) as area of (p0 p1 p2 p3 Z)
define a square as a rectangle all of whose sides have the same length
prove the area of a square with side length a is a ^ 2
prove that (p0 p0 p1 p2 Z) uniquely determines (p3 p0 p1 p2 Z)
prove the sum of the non-right angles in a right triangle is 90 deg
 (p0 p0 p1 p2 Z) implies 90 = (mV p1 p0 p2) + mV p1 p2 p0
prove the sum of the angles in a right triangle is 180 deg
 (p0 p0 p1 p2 Z) implies 180 = (mV p0 p1 p2) + (mV p1 p2 p0) + mV p2 p0 p1
prove the area of a right triangle with leg lengths a,b is -: a * b
prove the Pythagorean theorem
 (p0 p1 p1 L p2) implies (*: p0 d p2) = + / *: (p0 d p1) , (p1 d p2)
prove a triangle is right iff it satisfies the pythagorean theorem
define the diagonals of (p0 p1 p2 p3 Z) as (p0 p2 W) and p1 p3 W
prove the lengths of the diagonals of a rectangle (and square) are the same
prove the length of the diagonal of a square with side length 1 is %: 2
prove a right triangle with legs of length 3,4 has hypotenuse of length 5
define perpendicular bisector as line perpendicular to segment through midpoint
 (p0 p1 t p2) if ((-: p0 d p1) = p0 d p3) implies +. / (p2 = p3) , p0 p3 p3 L p2
prove (p0 p1 t p2) iff (p0 d p2) = p1 d p2
prove the *: of the diagonal of a rectangular solid is + / *: of its sides
prove the area of a triangle with base length b and height h is -: b * h
prove the hypotenuse of a right triangle is greater than or equal to a leg
prove (*. / (p0 p1 p2 L p3) , (p0 p1 i p3 , p4)) implies (p2 d p3) <: p2 d p4
prove opposite interior angles are the same
prove corresponding angles are the same
prove opposite angles are the same
prove the perpendicular bisectors of the sides of a triangle meet at a point

Isometries

Some Standard Mappings Of The Plane
define p0 is mapped to p1 as (p0 ; p1)
note map is similar in meaning to association,function,verb,arrow
define map of the plane as associating each point of the plane with another
define the value of M0 at p0 or the image of p0 under M0 as (M0 p0)
define M0 maps p0 onto p1 as p1 = M0 p0
define (M0 = M1) as (M0 p0) = M1 p0 for all p0
define the p0 constant map as (p0 Mp)
 p0 = p0 Mp p1
note (p0 [) is the constant map
 p0 = (p0 [ p1)
define the identity map as ]
 p0 = ] p0
note ] is the identity map
 p0 = ] p0
define the reflection map about (p0 p1 i) as (p0 p1 Mt)
 p0 = p1 p2 Mt p3 if (p1 p2 i p4) iff p0 p3 t p4
define the reflection map about p0 as Mm
 (p0 = p1 Mm p2) if p0 p2 m p1
define the dilation about p0 of p1 to p2 as (p0 p1 p2 MH)
 (p0 = p1 p2 p3 MH p4) if
 (*. / (p3 p1 p1 L p5,p6)(p1 p2 o p5)(p1 p3 o p6))<:(p3 p5 p6 H p4)*.p0 p3 i p4
define dilation by r0 about p0 as (p0 r0 IH)
 (p0 = p1 r0 IH p2) if (p1 d p0)=r0*p1 d p2
define the counterclockwise rotation about p1 by (p0 p1 p2 V) as (p0 p1 p2 Mo)
 (p0 = p1 p2 p3 Mo p4) if 
 (*./(p2 p4 o p5)(p2 p1 i p5)(p2 p3 i p6)(p2 p6 p6 L p1)p5 p6 Ed p4 p7)<:(p2 p4 o p0)*.p2 p7 i p0
note the rotation map defined assumes acute angles
define the counterclockwise rotation about p0 by r0 degrees as (p0 r0 Io)
 (p0 = p1 r0 Io p2) if *./(0<:r0)(r0<:360)r0=mV p2 p1 p0
note 0<:r0 implies (p0 r0 Io) is c.c. and r0<:0 implies (p0 r0 Io) is clockwise
prove p0 180 Io = p0 Mm
prove p0 180 Io = p0 _180 Io
prove (p0 0 Io = ])
prove (p0 360 Io = ])
note rotation by 0 or 360 degrees is the identity transformation
define (p0 r0 oV) as (p0 r1 oV) with *./(0<:r1),(r1<360),r0=r1+360*n for some n
prove rotation by a negative angle is rotation by a positive angle
define the arrow from p0 to p1 as a0 =: p0 ; p1
 ((p0 S a0) *. p1 T a0) if a0 = p0 ; p1
define p0 is an object of a0 if p0 S a0 or p0 T a0
 (p0 O a0) if (p0 S a0) +. p0 T a0
note, in general, a0;a1 is an arrow with objects a0,a1, source a0 and target a1
 *. / ((a0 , a1) O a0 ; a1) , (a0 S a0 ; a1) , a1 T a0 ; a1
define p0 p1 W as the directed line segment associated with the arrow p0;p1
 (p0 p1 W = p1 p0 W) iff p0 = p1
define translation by (p0 p1 W) as (p0 p1 MW)
 (p0 = p1 p2 MW p3) if
 ((p1 p3 p3 L p0) *. p1 p3 p2 H p0) +.
 *. / ((p1 p2 i p3)(-.p1 p2 i p4)(p1 p4 p2 H p5)p4 p5 p1 H p2)<:p0=p4 p5 MW p3
define p0 is a fixed point of M0 if p0 = M0 p0
prove that every point is a fixed point of ]
prove that p0 is the only fixed point of p0 Mp
prove p0 is the only fixed point of p0 Mm
prove p0 is a fixed point of (p1 p2 Mt) iff (p1 p2 i p0)
prove (-. 0 = mV p0 p1 p2) implies p1 is the only fixed point of p0 p1 p2 Mo
prove (-. 0 = r0) implies p0 is the only fixed point of p0 r0 Io
prove that (-. p0 = p1) implies (p0 p1 MW) has no fixed points
prove if -. 1 = r0 implies p0 is the only fixed point of p0 r0 IH
prove every point is a fixed point of p0 1 IH

Isometries
define M0 is an isometry if it preserves distance: (d=d M0)
 (p0 d p1) = (M0 p0) d M0 p1
prove isometries map distinct points to distinct points
 (-. p0 = p1) implies -. (M0 p0) = M0 p1
define y is in the image of A under M0 if y = M0 x for some x in A
assume point and line reflections, rotations, and translations are isometries
prove isometries of points are points
prove isometries of line segments are line segments
prove isometries of lines are lines
prove isometries of circles are circles
prove isometries of discs are discs
prove isometries of circular arcs are circular arcs
prove if distinct points p0 , p1 are fixed points of an isometry then so is
every point on p0 p1 i
prove an isometry with three noncolinear fixed points is the identity
prove (p0 1 IH) and (p0 _1 IH) are isometries (the only ones of the family IH)
prove isometries of parallel lines are parallel
prove isometries of perpendiculars are perpendicular
note isometries in 3 space

Composition of isometries
define the composition of M0 with M1, M1 followed by M0, as (M0 M1)
 (p0 = (M0 M1) p1) if (p2 = M1 p1) implies p0 = M0 p2
prove if M0 is an isometry then M0 = (] M0) and M0 = (M0 ])
prove the composition of two (p0 180 Io) is ]
prove the composition of isometries is an isometry
prove the composition of rotations about a point is a rotation about that point
 p0 (r0 + r1) Io = (p0 r1 Io p0 r0 Io)
prove that the composition of translations is a translation
 p0 p2 MW = (p1 p2 MW p0 p1 MW)
prove the composition of dilations about a point is a dilation about that point
 p0 (r0 * r1) IH = (p0 r1 IH p0 r0 IH)
prove the composition of isometries is associative (arrows in general)
define (M0 ^: 2) as (M0 M0)
define (M0 ^: 3) as (M0 M0 M0)
define (M0 ^: 1 + n) as (M0 M0^:n)
define (M0 ^: 0) as ] and (M0^:1) as M0
prove MI = (p0 Mm) ^: 2
prove MI = (p0 Mm) ^: 2 * n
prove (p0 Mm) = (p0 Mm) ^: 1 + 2 * n
prove (M0 ^: n0 + n1) = (M0 ^: n0 M0 ^: n1)
prove if M0 is a reflection through a line then (M0 ^: 2) is MI
note not all isometries commute

Inverse Isometries
define M0 as the inverse of M1 if (] = (M0 M1)) and (] = (M1 M0))
prove the inverse of a map is unique if it has one
define (M0 ^: _1) as the inverse of M0 if it has one
note (y = M0 x) iff (x = (M0 ^: _1)y) or ([ = (M0 ])) = (] = ((M0 ^: _1) [))
prove reflections are their own inverses
prove identity is its own inverse
prove ] = (p0 p1 MW p1 p0 MW) and ] = (p1 p0 MW p0 p1 MW)
prove (p0 p1 MW) and (p1 p0 MW) are inverses of each other
prove ] = (p0 r0 Io p0 -r0 Io) and ] = (p0 -r0 Io p0 r0 Io)
prove (p0 r0 Io) and (p0 -r0 Io) are inverses of each other
 (p0 -r0 Io) = (p0 r0 Io) ^: _1
prove ((M0 M1) ^: _1) = (M1 ^: _1 M0 ^: _1)
define M0 ^: _n0 as (M0 ^: _1) ^: n0
prove (M0 ^: n0 + n1) = (M0 ^: n0) M0 ^: n1
prove if M0,M1 are isometries with *./((M0=M1)p0,p1,p2) then (M0=M1) if M0^:_1 exists
prove every isometry actually does have an inverse
prove reflections about perpendicular lines commute
prove M0 , M1 , M2 isometries (M0 M1) = (M0 M2) implies M1 = M2
note symmetries of the square via isometries
note symmetries of the triangle via isometries
note symmetries of the hexagon via isometries
note do these isometric symmetries characterize these shapes?

Characterization Of Isometries
prove -. p0 = p1 fixed points of isometry M0 implies +. / (MI = M0) , p0 p1 Mt = M0
prove an isometry with only one fixed point is +. / Mo , Mo Mt
prove an isometry without a fixed point is +. / MW , (MW Mo) , ((MW Mo) Mm)

Congruences
define p00,p01,..,p0n is congruent to p10,p11,..,p1m if p00,..,p0n=M0 p10,..,p1m for some isometry M0
note if one set is the image of another under an isometry then they're congruent
prove circles with the same radius are congruent
prove discs with the same radius are congruent
prove segments with the same length are congruent
prove right triangles whose corresponding legs are congruent are congruent
prove triangles whose corresponding sides are congruent are congruent
prove squares whose sides are congruent are congruent
prove rectangles whose corresponding sides are congruent are congruent
assume the area of a region is equal to the area of its image under an isometry
prove congruence is an equivalence relation
prove any two lines are congruent
prove the sides of a triangle with angle measures 60 deg have equal length
define equilateral triangle if its sides are all the same length
prove SAS characterization of congruence
prove AAS characterization of congruence
prove the center of the circle inscribed in a triangle is the intersection of its angle bisectors

Area And Applications

Area Of A Disc Of Radius r
note a unit length determines a unit area
assume area of a square with side length a is a^2
assume area of a rectangle with side lengths a,b is a*b
prove the area of the dilation by r of a square of area a is a*r^2
assume the area of the dilation by r of a region with area a is a*r^2
define o.1 as the length of a circle with radius 1
prove the area of the dilation by r of a disc of radius 1 is o.-:r^2
note approximate regions with squares to find their area
note upper/lower bounds as areas inside and outside of figure
define ellipse as nonuniform scaling of a disc
prove map circle to ellipse algebraically
note scaling and volume in 3-space is similar

Circumference Of A Circle Of Radius r
assume ((o. 1) = mbdB p0 1) and (o. r) = mbdB p0 r
note approximate by dividing disc into n sectors with angles 360%n
note disc area to circle length
prove the length of the dilation by r of a segment of length a is r*a
assume the length of the dilation by r of an arbitrary curve of length a is r*a

Coordinates And Geometry

Coordinate Systems
define an origin as the intersection of perpendicular lines (called axis)
note the classical origin is the intersection of a horizontal and vertical line
note pick unit length, cut axes into segments left/right up/down
note cut plane into squares with unit side lengths
note label each point of intersection with a pair of integers
note intersection of perpendicular lines to axes through a point gives its coordinate
define the coordinate of the origin as 0,0
note meaning of the positive/negative components as motions
define x-coordinate is usually the first, y-coordinate is usually the second
prove the axes divide the plane into four quadrants
define the positive side of the second axis as counterclockwise from that of the first
note plot points
assume/prove every point corresponds to a unique pair of numbers
assume/prove every pair of numbers corresponds to a unique point
note points in 3-space

Distance Between Points
points on the number line are labeled so that algebraic definitions work simply
note the distance between points in the plane is found using the pythagorean theorem
prove the distance between points p0 and p1 on a number line is %:(p0-p1)^2
 (*./(p0=a0,b0),p1=a1,b1) implies (p0 d p1)=%:@+/@*:(a1-a0),(b1-b0)
assume distance as d=:%:@+/@*:- satisfies the required geometric properties
define the plane as all pairs of real numbers with distance %:@+/@*:-
prove (0 = p0 d p1) iff p0 = p1
define dilation as * i.e. (r * x , y) = (r * x) , r * y
prove (0 <: r) implies (d r * x , y) = r * d x , y 
prove ((r * [) d r * ]) = r * d
prove distance works in 3-space

Equation Of A Circle
assume (p0 p1 o p2) iff (p0 d p1) = p0 d p2
assume p0 r0 bdB p1 if r0 = p0 d p1
define p0 r0 bdB as the circle centered at p0 with radius r0
prove ((p0=:r0,r1) r2 bdB p1=:r3,r4) iff (*:r2)=+/*:p0-p1
prove the equation of a circle in r3,r4 with center r0,r1 and radius r2 is
 (*: r2) = + / *: (r0 , r1) - r3 , r4
prove (p0 r0 bdB p1) iff (*: r0) = + / *: p0 - p1

Rational Points On A Circle
prove ((*:c)=+/*:a,b) iff (1=+/*:(a,b)%c) iff 1=+/*:(x=:a%c),(y=:b%c) when -.c=0
note to solve (*:c)=+/*:a,b for integers a,b,c solve 1=+/*:x,y for rationals x,y
define a rational point as one whose components are rational numbers
prove (*./(t=:y%1+x),(1=+/*:x,y),-._1=x) <: *./x=((1- % 1+)*:t),y=(2* %(1+*:))t
prove 1=+/*:x,y rational <: *./x=((1- % 1+)*:t),y=((2*)%(1+*:))t for rational t
prove distinct rationals give distinct solutions
 (*./(0<:s),s<t) implies </((1-)%(1+))*:s,t
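
A sketch of the parametrization in Go (point is my name; floats stand in for
the rationals here, so equality only holds up to rounding):

package main

import "fmt"

// point maps a parameter t to a point on the unit circle:
// x is (1 - t^2) over 1 + t^2 and y is 2t over 1 + t^2.
func point(t float64) (x, y float64) {
    d := 1 + t*t
    return (1 - t*t) / d, 2 * t / d
}

func main() {
    x, y := point(0.5)
    fmt.Println(x, y, x*x+y*y) // 0.6 0.8 and a sum of squares of 1, up to rounding
}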

Operations On Points

Dilations And Reflections
assume (r0*r1,r2)=(r0*r1),r0*r2
prove (p0= p1 r0 IH p2) iff (p0=p1+r0*p2-p1) or (p0=(r0*p2)+(1-r0)*p1)
prove (p0= p1 Mm p2) iff (p0=p1-p2-p1) or (p0=(+:p1)-p2)
prove ((r0*r1)d r0*r2)=(|r0)* r1 d r2
note the n-dimensional case

Addition Subtraction And The Parallelogram Law
assume ((a0,a1)+b0,b1)=(a0+b0),a1+b1
prove commutativity (p0+p1)=p1+p0
prove associativity: (p0+p1+p2)=(p0+p1)+p2
prove 0,0 is an additive identity: (p0=p0+0,0) and p0=(0,0)+p0
prove additive inverses exist: ((0,0)=p0+-p0) and (0,0)=(-p0)+p0
prove the points (0,0);p0;p1;p0+p1 are vertices of a parallelogram
 (0,0),p0,p1,:p0+p1 W is a parallelogram
prove p0=(p0-p1)+p1
prove (0,0);p0;p1;p0-p1 are vertices of a parallelogram
prove (p0=p1 p2 MW p3) iff (p0=p1+(p2-p1)+p3-p1) or p0=p3+p2-p1
define norm p0 as (0,0) d p0
 norm =:(0,0) d
prove (p0 d p1)=norm p0-p1
prove (p0 d p1)=norm p1-p0
prove M0 is an isometry iff (norm p0-p1)=norm (M0 p0)-M0 p1
prove (p0 r0 bdB p1) iff (p1=(0,0) p0 MW p2) for some p2 with r0=norm p1 p2
prove every circle is the translation of a circle about the origin
 (p0 r0 bdB p1) iff (p1=(0,0) p0 MW p2) for some p2 with (0,0) r0 bdB p2
prove associativity: (r0*r1*p0)=(r0*r1)*p0
prove distributivity: (r0*p0+p1)=(r0*p0)+r0*p1
prove identity: p0=1*p0
prove annihilator: (0,0)=0*p0
prove translation is an isometry
 (p0 d p1)=(p2 p3 MW p0) d p2 p3 MW p1
prove a reflection through the origin followed by a translation is a
point-reflection
 (p0 p1 MW (0,0) Mm)= p2 Mm for some p2
prove a dilation through the origin followed by a translation is a
point-dilation
 (p0 p1 MW (0,0) r0 IH)= p2 r1 IH for some p2 and r1
prove the reflection of a circle through a point is a circle
for some p4,p5 (*./(p0=p1 Mm p2),p3 p4 o p2) iff (p4 p5 o p0)
prove the dilation of a circle through a point is a circle
prove ((]=(M0 p0 p1 MW)) and ]=p0 p1 MW M0) iff (M0 p2)=p0+(p0-p1)+p2-p0
prove the inverse of a translation is a translation
prove ((]=M0 p0 r0 IH) and ]=p0 r0 IH M0) iff (M0 p1)=p0+(%r0)*p1-p0
prove the inverse of a dilation is a dilation
prove (p0 = p1 p2 MW p0) iff (p0=p0+p2-p1) iff ((0,0)=p2-p1) iff p1=p2
prove translation doesn't have fixed points unless it is the identity
prove the fixed points of a transformation via its coordinate definition
prove (*./(p0=a0,a1),(e0=1,0),e1=0,1) implies p0=(a0*e0)+a1*e1
prove p0,(p0+r*e0),(p0+r*e1),:(p0+(r*e0)+r*e1) W is a rectangle

Segments, Rays, And Lines

Segments
prove (p0 p1 W p2) iff *./(p2=p0+(p1-p0)*t),(0<:t),t<:1
prove the point halfway between p0 and p0+p1 is p0+-:p1
prove every segment is a translation of a segment from the origin
prove every segment is a translation of a dilation of a unit segment from the origin
prove (p0 p1 W p2) iff *./(p2=((1-t)*p0)+t*p1),(0<:t),t<:1
assume (p0 p1 W) is a directed segment ordered by ((1-t)*p0)+t*p1 with 0<:t and t<:1
note p0 p1 W is also called a located vector
define the source of p0 p1 W as p0
define the target of p0 p1 W as p1
note p0 p1 W is also said to be located at p0
prove (p0 p1 MW = p1 p0 MW) iff p0=p1
note a point can be represented as an arrow whose source and target are equal

Rays
define the ray with vertex p0 in the direction of (0,0) p1 W as p0 (p0 + p1) R
prove p0 p1 R p2 iff *. / (p2 = p0 + t * p1 - p0) , (R. *. 0 <:) t for some t
prove p0 p1 R = p0 (p1 - p0) R
prove (R. *. 0 <)t implies p0 p1 R = p0 (t * p1) R
define p0 p1 R has the same direction as p2 p3 R if 
 *. / ((R. *. 0 <:) t) , (p1 - p0) = t * p3 - p2 
note this induces a sensed parallel axiom
note multidimensional forms

Lines
define p0 p1 W is parallel to p2 p3 W if *. / (R. t) , (p1 - p0) = t * p3 - p2 for some t
prove parallelism in this way is an equivalence relation
define p0 parallel to p1 if *. / (-. 0 = p0 , p1) , (R. t) , p0 = t * p1 for some t
prove a located vector belongs to a unique line
 p0 p1 W p2 implies p0 p1 i p2
prove (-.p0=0,0) implies ((0,0),:p0 i p1) iff p1=t*p0 for some t
note the line passing through p0 parallel to (0,0) p1 W is all points p0+t*p1 for some t
prove p0 p1 i p2 iff p2=p0+t*p1 for some t
note p0+t*p1 is called a parametric representation of the line i p0 (p0+p1)
note in N the parametric representation is actually p0 + p1 *
note t is called a parameter in p0+t*p1
note the following argument in N
 p0 =: a0 , a1   p0 is the ordered pair a0,a1
 p1 =: b0 , b1   p1 is the ordered pair b0,b1
 p =: p0 + p1 *   parametric description of the line through p0 parallel to p1
 x =: 0 { p   zeroth coordinate of p
 y =: 1 { p   first coordinate of p
 p = (x , y)
 x = a0 + b0 *
 y = a1 + b1 *
 xaxis =: 0 , ~
 p = xaxis x  suppose p is equal to a point on the xaxis
 (x , y) = 0 , ~ x   p = (x , y) and (x , 0) = xaxis x
 (x = x) *. 0 = y   pairs are equal iff their components are
 x = x   this is always true, so we don't get any new information
 0 = y   thus (p=xaxis x) iff (0=y)
 (0 = y) t   does there exist t such that 1=((0=y)t) ?
 (0 = a1 + b1 *) t
 (0 t) = (a1 + b1 *) t
 0 = a1 + b1 * t
 t =: b1 % ~ s
 0 = a1 + b1 * b1 % ~ s
 0 = (a1 +) ] s   by algebra 1=]*(%]) or (-.0=[)<: ]=[ * ] % [
 0 = a1 + s
 s =: (- a1) + u
 0 = a1 + (- a1) + u
 0 = ] u
 0 = u
 t = b1 % ~ (- a1) + 0
 t = b1 % ~ (- a1)
 t = (- a1) % b1
 t = - a1 % b1
   p - a1 % b1   yields a point on the x-axis, it is unique (by other arguments)
note mW O p0 can be used to represent the magnitude of a velocity (speed)
note when do two parametrically described lines intersect?
prove when a line crosses a circle
for what x and y does (p=(x,y))*.(*:r)=(+/(*:x,y))
prove if *./-.O=A,B  then A=:a0,a1 is parallel to B=:b0,b1 iff 0=(a0*b1)-a1*b0
prove if two lines are not parallel then they have exactly one point in common
prove if P=:p,q and (*:r)>:+/*:P then P+A* intersects (*:r)=(+/(*:(0 1{))) twice
prove if A=:a0,a1 and B=:b0,b1 then (x,y)=(A +)(B *) iff x=a0 + b0 * and y=a1 + b1 *

Ordinary Equation For A Line
prove if (x , y) = ((a0 , a1) +) ((b0 , b1) *) then
 x = a0 + b0 *
 y = a1 + b1 *
 ]
 (b % ~) (b *)
 ((b % ~) ]) (b *)
 ((b % ~) (a - ~ a +)) (b *)
 (b % ~) ((a - ~ a +) (b *))
 (b % ~) (a - ~ ((a +) (b *)))
 (b % ~) (a - ~) x
 NB. alternatively (and going along the classical route)
 (a0 , a1) + (b0 , b1) * t
 (a0 , a1) + (b0 * t) , (b1 * t)
 (x =: a0 + b0 * t) , (y =: a1 + b1 * t)
 t
 t * 1
 t * (b0 % b0)
 (t * b0) % b0
 (b0 * t) % b0
 (0 + b0 * t) % b0
 ((- a0) + a0 + b0 * t) % b0
 ((- a0) + x) % b0
 (x - a0) % b0
 t = (x - a0) % b0
 t = (y - a1) % b1  NB. By a similar argument.
prove the ordinary tacit form has x,y on the right
 (x , y) = (A +) (B *) 
 ]
 (B % ~) (B *)
 (B % ~ A - ~ A + B *)
 (B % ~ A - ~) (x , y)
 ] = (b0 % ~ a0 - ~) x
 ] = (b1 % ~ a1 - ~) y
 ((b0 % ~ a0 - ~) x) = ((b1 % ~ a1 - ~) y)
 y = (a1 + b1 * b0 % ~ a0 - ~) x

Trigonometry

Radian Measure
define x=mV p0 p1 p2 if *./(0<:x),(x<:o.1),(x%o.1)=(mclBV p1 1 p0 p2)%(mclB p1 1)
prove if x=mV p0 p1 p2 then (mclB p1 1)=o.1r2 implies x=mclBV p1 1 p0 p2
prove (deg x)=((o.1)%180)*(rad x)
note from now on: radians only
prove (x%o.1)=(mbdBV p0 1 p1 p2)%(mbdB p0 1)
if x>:o.2 then "x rad" means "w rad" with *./(0<:w),(w<o.2),(x=w+n*o.2)
if *./(0<z),(x=-z) then (rad x) means "w rad" with *./(0<:w),(w<o.2),(z=(n*o.2)-w)

Sine And Cosine
if *. / (O p2 K p3) , (-. p3 = O) , (p3 = (a , b)) then "sine V p3 O (1,0)" is b % r =: %: + / *: a , b
"cosine V p3 O (1,0)" is a%r
sine and cosine are independent of the point p3 (prove)
geometrically this means that any two such triangles are similar
if O 1 bdB p3=:a,b then (sine V p3 O (1,0))=b and (cosine V p3 O (1,0))=a
for O 1 bdB p3=:(a,b) define (sine mV p3 O (1,0))=b and (cosine mV p3 O (1,0))=a
the signs of sine and cosine depend on the quadrant the relevant angle occupies
Q1:+,+ Q2:-,+ Q3:-,- Q4:+,-
if (LA p0 p1 p2) then (sin V p1 p0 p2)=(d p1 p2)%(d p0 p1)
if (LA p0 p1 p2) then (cos V p1 p0 p2)=(d p0 p2)%(d p0 p1)
"sin x" is (sine rad x)
"cos x" is (cosine rad x)
from the definition of rad (for an arbitrary angle) (sin x)=sin x+n*o.2
(cos x) = cos x + n * o. 2
using plane geometry and the Pythagorean theorem:
=======================
x      sin x    cos x
-----------------------
o.1r6  1r2      (%:3)%2
o.1r4  1%%:2    1%%:2
o.1r3  (%:3)%2  1r2
o.1r2  1        0
o.1    0        _1
o.2    0        1
=======================
consider 1,1,%:2 and 1,(%:3),2 right triangles (and their angles)
reflect o.1r6, o.1r3, o.1 over longest leg and compute
if 1=$x then 1=+/*:(sin,cos)x since
 1
 (*: r) % *: r
 ((*: a) + *: b) % *: r
 ((*: a)% *: r) + (*: b) % *: r
 (*: a % r) + *: b % r
 + / *: ((a % r) , b % r)
 + / *: (sin x) , cos x
 + / *: (sin , cos) x
(cos x) = sin x + o. 1r2 and (sin x) = cos x - o. 1r2
(sin - x) = - sin x and (cos x) = cos - x
determine a distance using small angle measurements and a known length
polar coordinates
 r = %: + / *: x , y
 V =: mV (x , y) O (1 , 0)
 (x % r) = cos V
 (y % r) = sin V

The Graphs
plot ] , sin

The Tangent
tan =: sin % cos
tan only gives relevant information when -.0=cos
if *. / (O p2 K p3) , (-. p3 = O) , (p3 = a , b) then (b % a) = tan mV p3 O p2
tangent of the angle made by a line crossing the x-axis is the line's slope
 plot ],tan
we only plot tables of values
cot=: % tan 
sec=: % cos 
cosec =: % sin
1 = - / *: (sec , tan) x
1 = - / *: (cosec , cot) x

Addition Formulas
(sin x + y) = ((sin x) * cos y) + (cos x) * sin y
(cos x + y) = ((cos x) * cos y) - (sin x) * sin y
(sin x - y) = ((sin x) * cos y) - (cos x) * sin y
(cos x - y) = ((cos x) * cos y) + (sin x) * sin y
(sin +: x) = +: * / (sin , cos) x
(cos +: x) = - / *: (cos , sin) x
(*: cos x) = (1 + cos +: x) % 2 or (+: *: cos x) = 1 + cos +: x
(*: sin x) = (1 - cos +: x) % 2 or (+: *: sin x) = 1 - cos +: x
(* / sin (m , n) * x) = -: - / cos (m (- , +) n) * x
(((sin m *) * (cos n *)) x) = -: + / sin (m (+ , -) n) * x
(* / cos (m , n) * x) = -: + / cos (m (+ , -) n) * x

Rotations
Since (r , V + x) = O x oV r , V then
 x0 = r * cos V
 y0 = r * sin V
 x1 = r * cos V + x
 x1 = r * ((cos V) * cos x) - (sin V) * sin x
 y1 = r * sin V + x
 y1 = r * ((sin V) * cos x) + (cos V) * sin x
 x1 = ((cos V) * x0) - (sin V) * y0
 y1 = ((sin V) * x0) + (cos V) * y0
the rotation matrix for x is 2 2 $ (cos , (- sin) , sin , cos) x
dilation matrix compositions of actions as multiplications of matrices
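
A sketch of the coordinate form of rotation in Go (rotate is my name):

package main

import (
    "fmt"
    "math"
)

// rotate sends (x0, y0) to its image under counterclockwise rotation
// by x radians about the origin, per the formulas above.
func rotate(x0, y0, x float64) (x1, y1 float64) {
    s, c := math.Sincos(x)
    x1 = c*x0 - s*y0
    y1 = s*x0 + c*y0
    return
}

func main() {
    fmt.Println(rotate(1, 0, math.Pi/2)) // roughly 0 1
}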

Some Analytic Geometry

The Straight Line Again
the plot of points for which c = F yields 1 is called the graph of F
an arbitrary point on the graph of ]=a* has the form (1 , a) *
a point on the graph of ] = (- ]) is of the form (1 , -1) *
the graph of [ = (b + a *) is a straight line parallel to the graph of [ = a * ]
 y1 =: y - b so y1 = a * x with points of the form (x , a * x) and [ = (b + a *) are (] , (b + a *))
the slope of a line that is the graph of [ = (b + a * ]) is a
*. / (y0 = b + a * x0) , y1 = b + a * x1 implies *. / ((y1 - y0) = a * x1 - x0) , a = (y1 - y0) % x1 - x0
(a = (y - y0) % x - x0) iff ((y - y0) % x - x0) = (y1 - y0) % x1 - x0
0 = c + (a * x) + b * y  equation of a line

The Parabola
(y - b) = c * (x - a) ^ 2 is called a parabola

The Ellipse
((a , b) *) shear dilation
1 = + / *: (u % a) , (v % b) is an ellipse

The Hyperbola
c = x * y is a hyperbola

Rotation Of Hyperbolas
c = - / *: y , xNotes on Constructive Mathematics by Errett Bishop, Douglas 
Bridges

A regular sequence is a Cauchy sequence whose modulus of convergence is the 
identity function.
Def. x is regular if (| (x n) - x m) <: (% n) + % m
Def. x eq y if (| (x - y) n) <: +: % n 
Lem. x eq y iff there exists N such that ((N j) <: n) implies (| (x - y) n) <: % j
Prf. Let x eq y
(| (x - y) n) <: +: % n 
(% -: n) = +: % n
N =: +:
(N j) <: n
(N j) = +: j
(+: j) <: n
j <: -: n
(% -: n) <: % j
(| (x - y) n) <: % j
Therefore
N =: +:
(N j) <: n
(| (x - y) n) <: % j
Assume ((N j) <: n) for some N
(| (x - y) n) <: % j
Pick m so that (m >: j >. N j) then
j <: m
% m <: % j
(| (x - y) n) <: A =: + / @ | ((x n) - x m) ,((x - y) m) , (y m) - y n
A <: B =: + / ((% n) + % m) , (% j) , (% n) + % m
B <: (+: % n) + 3 * % j
(| (x - y) n) <: (+: % n) + 3 * % j  
(| (x - y) n) <: +: % n   NB. since j is arbitrary: if a <: b + 3 * % j for all j then a <: b
Therefore x eq y .
Prp. eq is an equivalence relation
Prf. Let x be regular.
(| (x n) - x m) <: (% n) + % m
m =: n
(| (x n) - x n) <: (% n) + % n
((x - x) n) = (x n)- x n
(+: % n) = (% n) + % n
(| (x - x) n) <: +: % n
Therefore x eq x .
Let x eq y
(| (x - y) n) <: +: % n
(| (x - y)) = | (y - x)
(| (y - x) n) <: +: % n
Therefore y eq x .
Let x eq y and y eq z
(| (x - y) n) <: +: % n
(| (y - z) n) <: +: % n
(| (x - z) n) <: A =: (| (x - y) n) + | (y - z) n
A <: B =: (+: % n) + +: % n
B = +: +: % n
(| (x - z) n) <: +: +: % n
N =: +: +:
(N j) <: n
(| (x - z) n) <: % j
Therefore x eq z and eq is an equivalence relation.





20151003T2147 Some Silly Notational Experiments

-`1
1`-

1.-
-.1

+.=.-
<=>
+`3

1-`

-1 3

-`1

3+ -`1 3

-.1 


 0<

-(1 2 3)

[* + +]
[f g h]

[x+y]

{x; y; z}


0; 2; 1;

0 1 2; 3 4 5; 6 7 8

0 1 2, 3 4 5, 6 7 8

(0;1;2)


(
0;
1;
2;
)

(
0 1 2;
3 4 5;
6 7 8;
)

(0; 1; 2)
(0 1 2);

0; 1; 2; 3

f.g.h 

?1

?2

1@3@4
1,3,4,5

f.g.h
1.g
g.1

g&1
g`1

1`g

f`g`h

f"g"h

f'g'h

+`-

(f)g(h)

-(1) 

(f g h)

f.g h
f g h
x f g h y

(+)=(-)
(=)+(!)
(=+!)
!+%

f()
f.
+.=.0

x.1 x.2

[x.1 , x.2]

[x.0 , x.1]

L

L R

[f g h]

f[x]

[x f 3]

y f x

y + x
y = x

[3 f]

[-1]
[x-1]3
3-1
2

f[x-1]
3 f[x-1] 4
f(4-1)
f(3)

[f [x-1]]
[f x-1]
[f x]
[y f x]

f[]

[f]

[]f
[]f[]

[3 f]


f[x] 3
f[x]/

f/
f[]/
f./
f.[x]

![+]%

+[%]


[+]%

[-]1

[+%]

+.%

+/[%]#

+/ % #
+/%#





20151003T2055 Rehashing Notes on ^ and Factorial Powers Example

I’ve been spending a lot of time recently working on N notation for primitive 
recursion beginning with the following “natural” definition:

(f^0 n) ~ n
(f^(S m) n) ~ f^m f n

This conforms to classical iterative notation for repeated application of a 
function in a fixed point manner.

The extension of this operation is where I believe primitive recursive
definitions can be introduced simply and with a sort of fitting surprise; I
just have yet to find the best or most surprising form.

In N, J, and k the following notation is frequently used without concern for 
the space needed to perform the computation:

+/ f !4
+/ f 0 1 2 3
+/ (f 0), (f 1), (f 2), f 3
(f 0) + (f 1) + (f 2) + (f 3)

This is the notation that will eventually replace classical summation notation 
(and product notation etc.).
But, as you can see, it first stores all the values of f at each numeral
0 1 2 3 before summing them.
Hence the expression +/f (which you could read as “sum over f of”) is best
used on pre-existing data.

I think that ^ can be used, in a natural way, to provide the same facilities 
without taking up any space other than what is needed to compute each step of 
the sum:

(+^f 1) ~ f 0
(+^f S n) ~ (+^f n) + f n

So that

(+^f 4)
(+^f 3) + f 3
(+^f 2) + (f 2) + f 3   (this being when you could compute the sum of f.2 and
f.3, simply store the result, and continue the iterative calculation)
(+^f 1) + (f 1) + (f 2) + (f 3)
(f 0) + (f 1) + (f 2) + (f 3)
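
A minimal sketch of this in Go (sumIter is my name): the running total is the
only thing held in memory, and no list of values is ever built.

package main

import "fmt"

// sumIter computes (+^f n), i.e. (f 0) + (f 1) + ... + f n-1,
// carrying only the accumulated sum from step to step.
func sumIter(f func(uint) uint, n uint) uint {
    var acc uint
    for i := uint(0); i < n; i++ {
        acc += f(i)
    }
    return acc
}

func main() {
    sq := func(x uint) uint { return x * x }
    fmt.Println(sumIter(sq, 4)) // 14, which is 0 + 1 + 4 + 9
}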

Thus the factorial powers “x to the k falling” and “x to the k rising” (which 
occur frequently in CS and numerical math) would be written as:

x *^- k
x *^+ k

respectively if the following conventions are followed for a pair of dyadic 
verbs (binary functions) f and g:

(m f^g 1) ~ m g 0
(m f^g S n) ~ (m f^g n) f m g n

For example

x*^-3
(x*^-2)* x - 2
(x*^-1)* (x - 1) * x - 2
x * (x - 1) * (x - 2)
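
Here is a hedged Go sketch of the f^g convention for dyadic verbs (fgPower is
my name for it):

package main

import "fmt"

// fgPower computes (m f^g n) following
//   (m f^g 1)   ~ m g 0
//   (m f^g S n) ~ (m f^g n) f m g n
func fgPower(f, g func(int, int) int, m, n int) int {
    acc := g(m, 0)
    for k := 1; k < n; k++ {
        acc = f(acc, g(m, k))
    }
    return acc
}

func main() {
    times := func(a, b int) int { return a * b }
    minus := func(a, b int) int { return a - b }
    plus := func(a, b int) int { return a + b }
    fmt.Println(fgPower(times, minus, 5, 3)) // 60, i.e. 5 * 4 * 3, "5 to the 3 falling"
    fmt.Println(fgPower(times, plus, 5, 3))  // 210, i.e. 5 * 6 * 7, "5 to the 3 rising"
}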

There are still some “notational kinks to work out” but this method of 
combining binary operations via the classical “power of” operation has been a 
long time coming.

One reason for abandoning the use of ^ as “exponent” is that the existence
of roots of real numbers is, I believe, a highly suspicious belief to hold
with any certainty, for, as far as I know, it is still unknown whether or not
there is a primitive recursive real number which cannot have a primitive
recursive expansion. In other words, when it comes to sitting down and
calculating roots via rational root approximations we have not yet
established with certainty whether there might be a primitive recursive real
number (i.e. a “real number” whose rational approximations are calculable
with pen and paper) which does not have a primitive recursive expansion
(i.e. pick a scale of measurement, like we do with decimals, and try to write
out successive decimal approximations to that real number). This startling
possibility is one reason to consider using ^ for composing arithmetic
operations rather than as an arithmetic operation in itself.

To contradict my last statement I do allow for numeral arguments to both sides 
of ^ and expect they should abbreviate the following:

 2^3
2 2 2

 2^(4 4)
2 2 2 2
2 2 2 2
2 2 2 2
2 2 2 2

 5^(3 3 3)
5 5 5
5 5 5
5 5 5

5 5 5
5 5 5
5 5 5

5 5 5
5 5 5
5 5 5

which is a 3 by 3 by 3 “brick” (as it is referred to in J) all of whose atoms 
are 5.
Functions raised to rectangular arrays behave as follows:

 f^(3 3) n
(f^3 n) , f^3 n

 f^(3 3; 3 3) n
(f^3 n), (f^3 n);
(f^3 n), (f^3 n)

The reason for these conventions comes from the analysis of a multidimensional 
array via its frames, cells, items, and atoms.

A practical/theoretical reason for me to obsess over these silly notational 
things is to be found in Hardy, Littlewood, and Pólya's Inequalities. If you 
flip through the later chapters, or even the earlier ones, you find that they 
are hardly doing tensor analysis, and yet even primitive inequalities are 
thwarted by their own notation.





20151002T1416 Less Than, Greater Than and N

(remember - means monus in N not minus e.g. 0 ~ 1-5)

In the current version of N notation the signs > and < are the classical 
operations of max and min respectively e.g.

 3>5
5
 3<5
3
 300>432
432

This is in stark contrast to the classical interpretation of > as greater 
than and < as less than.
In N, for any numerals x and y we have

(x > y) ~ x + y - x
(x > y) ~ y + x - y
(x < y) ~ x - x - y
(x < y) ~ y - y - x

From which it follows that

(x > y) ~ y > x
(x < y) ~ y < x

These being STATEMENTS ABOUT numerals rather than an abbreviation for A numeral.
One could use these statements to say "max and min are commutative" but one 
would NOT say "max and min are reflexive" for reflexivity is a property of 
relations and, in N, > and < are not relations.
(now, yes, you can construct a relation or interpret them as relations, but if 
you fit these operations into Goodstein's Equation Calculus then you will see 
that they are clearly just abbreviations for numerals and not "relations")

Given a pair of numerals x and y to say "x is greater than or equal to y" we 
could write:

x ~ x + y - x
x ~ x > y      N.B. "x is the same as the max of x and y"

It is not yet clear to me whether allowing ~ to be defined in different ways 
by the user is a good design idea or a bad one, but I've been toying with 
giving people the power to redefine ~ so that they can project statements 
about arithmetic onto arithmetic as they see fit.

In N's current system it is easiest to say that a statement (x ~ y) is "True" 
when the sgn of their positive difference is 0 and "False" when the sgn of 
their positive difference is 1.
An alternate way of looking at this, using just positive difference, is to say 
that a statement (x ~ y) is true if the positive difference of x and y is 0; 
otherwise, we are given shades of Falseness depending on how big the positive 
difference is, i.e. how far it is from 0.
So for now we might think of ~ as an abbreviation for *= which maps statements 
about arithmetic into arithmetic. (That ~ is the only symbol required to build 
a complete formalization of all elementary number theory via primitive 
recursive functions is the whole point of Goodstein's equation calculus, where 
he uses = for ~ and |x,y| for the positive difference; I've just done the 
proper thing and simplified his concepts using hindsight.)

~ : *=  (read as "same is signum positive difference" or "same means the claw 
of signum with positive difference" or "same abbreviates signum positive 
difference" or some as yet undocumented interpretation)

Thus ~ is an abbreviation for the claw *= which is sgn pd so that
 0 *= 0
0
 0 ~ 0
0
 12 *= 6
1 
 12 ~ 6
1

Under this interpretation of ~ we can define the relations leq (less than or 
equal to) and  geq (greater than or equal to) to give numerals as well

leq:{x ~ x - x - y}
leq:{x ~ x < y}
leq:{x}~<           N.B. This last definition is a fork of the left 
identity, same, and min

 3 leq 4
0
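
To make the arithmetic-first reading concrete, here is a small Go sketch built 
from plus and monus alone (all names are my own; sgn plays the monadic * of 
the claw *=, which is an assumption on my part):

package main

import "fmt"

// monus is N's dyad -: subtraction floored at 0.
func monus(x, y int) int {
    if x < y {
        return 0
    }
    return x - y
}

// pd is positive difference, the dyad =: {x=y} ~ {(x-y)+y-x}.
func pd(x, y int) int { return monus(x, y) + monus(y, x) }

// sgn gives 0 for 0 and 1 otherwise.
func sgn(x int) int {
    if x == 0 {
        return 0
    }
    return 1
}

// same reads ~ as sgn of positive difference: 0 when the arguments
// coincide, nonzero "shades of Falseness" otherwise.
func same(x, y int) int { return sgn(pd(x, y)) }

// max and min from plus and monus alone:
// (x > y) ~ x+y-x and (x < y) ~ x-x-y.
func max(x, y int) int { return x + monus(y, x) }
func min(x, y int) int { return monus(x, monus(x, y)) }

// leq mirrors leq:{x ~ x < y}: 0 exactly when x is at most y.
func leq(x, y int) int { return same(x, min(x, y)) }

func main() {
    fmt.Println(max(3, 5), min(3, 5))    // 5 3
    fmt.Println(same(0, 0), same(12, 6)) // 0 1
    fmt.Println(leq(3, 4), leq(4, 3))    // 0 1
}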

(
In J the left identity is given by [ and the right identity is given by ] these 
are the simplest of the immensely useful identity/projection functions 
introduced and used with great effect by Gödel and Kleene in their work on 
mu-recursive functions.
I'm not sure if I should use them in N for the same reason, or if they should 
be reserved for the now familiar index notation so often found in modern 
programming languages.
In this case the definition leq:[~< would be shorter and read "leq 
abbreviates left identity same min" which could be expanded from Pidgin English 
to "leq is an abbreviation for the left identity being the same as the minimum".
One can actually define left and right identity from plus and monus so that

LI:{(x + y) - y}
RI:{(x + y) - x}

This suggests that one should consider the primitive patterns of all expressions 
having the form

u f v g w
(u f v) g w

where u,v,w are one of the numerals x or y and f,g are one of the verbs + or - 
(plus or monus).
)





20151002T1227 Why use ^ rather than / ? Or what relationships are there between 
^ and / with ! ?

If space is not a concern, and if you are inclined to view things in a 
classical way then the following way of calculating is more appropriate:

+/!4
+/ 0 1 2 3
0 + 1 + 2 + 3
0 + 1 + 5
0 + 6
6

Whereas, with at least one of the definitions for ^ redo given below, you 
could write this as:

0 +^+ 4
(0 +^+ 3) + 0 + 3
(0 +^+ 3) + 3
(0 +^+ 2) + 0 + 2 + 3
(0 +^+ 2) + 0 + 5
(0 +^+ 2) + 5
(0 +^+ 1) + 0 + 1 +5
(0 +^+ 1) + 0 + 6
(0 +^+ 1) + 6
(0 +^+ 0) + 0 + 0 + 6
(0 +^+ 0) + 0 + 6
(0 +^+ 0) + 6
0 + 0 + 6
0 + 6
6

The previous sequence of events constitutes a calculation using one of the 
definitions for ^ given below.
It is not the most efficient definition, and clearly this execution shows how 
one should and could improve the definition.
It is a good example of how one might hope to relate ^ and / as follows

(+/f!n) ~ +^f n

In other words, if you wanted to add the values of f from 0 to n-1 then you 
could do it in one of two ways:

The +/f!n way:
[1] list the arguments you want to give to f using (!n)~ 0,1,2,3,4,..,n-1
[2] Calculate f of !n i.e.
f!n
f 0,1,2,..,n-1
(f 0),(f 1),(f 2)..,f n-1
[3] sum over these values:
+/f!n
+/f 0,1,2,..,n-1
+/(f 0),(f 1),(f 2),..,f n-1
(f 0) + (f 1) + (f 2) + .. + f n-1

The +^f n way:
[1] Apply definition of verb +^f to n
+^f n
(+^f n-1) + f n-1
(+^f n-2) + (f n-2) + f n-1
..
(+^f 0) + (f 1) + (f 2) + .. + (f n-2) + f n-1
(f 0) + (f 1) + (f 2) + .. + (f n-2) + f n-1

Notice that if you were actually calculating with +^f instead of +/f you could 
compute each value of f as it is needed and not before.
When using +/f!n you must reserve enough space to keep all the values: all the 
data you will add is generated and held until the very end of the calculation.
I think the similarities are undeniably important.
Specifically, you would use the form +/f when you already have some data in an 
array A and you want to sum its transformation through f i.e. +/f A.
If you do not already have an array of data A, and you would have to generate 
it just for the current computation, then you can work in place using +^f n.
It is likely that you will have to transform n via a helper function g before 
f, so that the final form looks more like +^f.g n, where at each step you get

+^f.g n
(+^f.g n-1) + f.g n-1
(+^f.g n-2) + (f.g n-2) + f.g n-1
and so on.

It doesn't make much of a difference on small data sets (as is often the case 
with most operations), but on huge data sets the difference is profound.
For one, the form +^f needs only as much space as is needed to accumulate the 
sum + at each step, whereas +/f goes over a potentially huge set of data in 
order to summarize it by a single numeral.

It's important to recall that the "classical" motivation for using ^ in this 
way is from iteration of a monadic (unary) verb (function):

%^2 300     N.B. % means "integer square root" and %^2 "integer square root 
repeat two"
%^1 % 300
%^1 17
%^0 % 17
%^0 4
{y} 4   N.B. {y} means "the right argument"; projection of right argument; 
identity
4
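
In Go the same iteration might be sketched as follows (iterate and isqrt are 
my own names, and the linear-search isqrt is only the simplest thing matching 
the monadic %):

package main

import "fmt"

// isqrt stands in for the monadic % of N: the largest k with k*k <= n.
// A linear search is plenty for a sketch.
func isqrt(n int) int {
    k := 0
    for (k+1)*(k+1) <= n {
        k++
    }
    return k
}

// iterate mirrors f^n for a monadic verb f:
//   f^0 y     gives y        (f^0 is {y}, projection of the right argument)
//   f^(S n) y gives f^n f y
func iterate(f func(int) int, n, y int) int {
    if n == 0 {
        return y
    }
    return iterate(f, n-1, f(y))
}

func main() {
    fmt.Println(iterate(isqrt, 2, 300)) // 4, matching the trace above
}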

Many people are most familiar with using ^ to indicate the exponential 
function, but in N if you were to use ^ with two numeral arguments you might be 
surprised (or not):

 2^3
2 2 2
 5^6
5 5 5 5 5 5
 5^(3 3)
5 5 5
5 5 5
5 5 5

It just repeats (or copies) the left argument by the right argument.
This gives an analogy making sense with functions:

 f^3 x
f f f x
 f^5 x
f f f f f x
 f^(3 3) x
(f^3 x) , f^3 x
 f^(3 3; 3 3) x
(f^3 x),(f^3 x);
(f^3 x),(f^3 x)

Though the last few showcase that the behavior on functions is a bit different 
when the right argument is a rectangular array of numerals.
Interestingly, or perhaps not, the following calculate the same value using 
these notational conventions:

 f^(3 3; 3 3) x
(f^3 x),(f^3 x);
(f^3 x),(f^3 x)

 f^3 (x,x);x,x

The adverb rank " should be used in order to deal with the application of f^3 
on different frames of its argument.
There is still a lot of work for me to do on organized array actions.
There are a lot of conventions, and so far I am most pleased by Iverson's use 
of cells and frames in J via the rank " adverb (operator).
The only way to really get frames, cells, atoms, and items of rectangular 
arrays (and eventually trees) to work "right" is to see how they're used in 
Applied Analysis.
Specifically, the use of rectilinear arrays as tensors in simulations and 
approximations of physical phenomena.

As I said earlier, the use of ^ is most often associated with exponentiation.
In general, exponentiation is actually a very different operation from its more 
familiar "power of" denotation.
Specifically, people think of 2^4 as "two to the fourth power" or "two to the 
power of four" and from this definition "to the power of" is repeated 
multiplication of the left argument by the right argument number of times.
So in classical notation one would calculate 2^4 as follows:

2^4
2 * 2 * 2 * 2
2 * 2 * 4
2 * 8
16

With N's current notation for ^ , power is calculated as follows:

(2*)^4 1
2* 2* 2* 2* 1
2* 2* 2* 2
2* 2* 4
2* 8
16

So that the thing which is repeated four times is the act of multiplying by 2.
The verb (function) "multiply by two" is written in N as 2* and for now the 
parenthesis around it in (2*)^4 1 are needed so that ^ can distinguish it from 
2 *^4 1 which currently has no clear meaning in N's notation.

People might find it clumsy to use (2*)^4 1 instead of the classical 2^4 which 
seems much simpler.
It is easy to define power using E as a binary verb:

 E:{(x*)^y 1}
 2 E 4
16
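
A short Go sketch of the same definition, where the projection (2*) becomes a 
closure over the left argument (E, iterate, and timesX are invented names):

package main

import "fmt"

// iterate mirrors f^n y for a monadic verb f, written as a loop.
func iterate(f func(int) int, n, y int) int {
    for ; n > 0; n-- {
        y = f(y)
    }
    return y
}

// E mirrors E:{(x*)^y 1}: the projection "multiply by x" is repeated
// y times starting from 1.
func E(x, y int) int {
    timesX := func(n int) int { return x * n }
    return iterate(timesX, y, 1)
}

func main() {
    fmt.Println(E(2, 4)) // 16
}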

Why not just use the classical notation?
The reason is that not only do you gain the ability to use ^ in all those 
settings where you are "repeating" any sort of operation on any sort of 
argument (something which cuts to the heart of arithmetic and math in 
general), but, in general, exponentiation is hard outside of elementary number 
theory.

If you open a book on real analysis you'll find the first chapter (or maybe 
more) dedicated to a characterization of the set of real numbers.
For example, in Rudin's well known Principles of Mathematical Analysis, the 
real numbers are introduced as an ordered field having the least upper bound 
property and containing the rational numbers.
Furthermore we are told (it is proven) that any other ordered field with the 
least upper bound property and containing the rational numbers is isomorphic 
(read "the same as") with the real numbers.
The real numbers are "constructed" by Rudin using Dedekind Cuts (in actuality 
he uses Russell's refinement of Dedekind's original construction: Russell 
noticed that you don't need to keep the left and right side of a cut, you only 
need one side and that's enough for the same exact argument to go through).
The construction proceeds from the set of rational numbers by first collecting 
all those subsets of the rationals that are Dedekind Cuts (they have certain 
properties that are obviously similar to those you are looking for in an 
ordered field with the least upper bound property).
He then proves that the arithmetic of rational numbers can be extended to these 
cuts and completed to satisfy the requirements of an ordered field having the 
least upper bound property.

From the characterizing properties of the real numbers as an ordered field with 
the least upper bound property that contains the rational numbers it is proven 
that it is possible to define an operation of "n-th root" of every real 
number.
The proof of existence of a number satisfying the property of an n-th root is 
given by first constructing a bounded set and showing that its least upper 
bound is the unique number satisfying the property of an n-th root.

My contention is that this construction is highly misleading, and hardly 
"gives" the real number that is the n-th root.
In fact, whenever we take the n-th root of a number we're always doing it with 
a rational number because we can not write out by hand anything but a reference 
to most "real numbers".
Said another way, we can claim that x is a real number, which means it might be 
defined via any number of logical statements, but we may have little to no 
knowledge of how to approximate or "realize" x as some kind of abbreviated 
quantity.

Alternatively, one can proceed down Goodstein's Recursive Analysis.
There is also Bishop's Constructive Analysis, but I have yet to read it again 
after having entered on such a detailed study of Goodstein's works.

Back to exponentiation: in N and in Goodstein's system, if you want to give 
the n-th root of a number (specifically of a rational number), then it must be 
given via a primitive recursive function, something which can actually be 
calculated.
The need for ease of calculation aids any proof that the procedure gives the 
desired quantity, but it also lends insight into managing the calculation of 
sometimes hard-to-get numerals.

The use of sets to "build" a number is idealistically valid, but the practical 
construction of numerals is less a matter of collections and more a matter of 
proper abbreviations and algorithms for their efficient manipulation.
This is true EVEN in the case of using sets and not just numerals and function 
signs for operations on numerals.

In the future, it will hopefully be easier to see that one can recreate the 
real numbers and their various operations by crossing the proper ordinal 
barrier via an argument by transfinite induction.
This is likely to have the benefit of making certain analytic arguments (in the 
classical sense) more the product of primitive recursive analysis.

It's at this point that I must remark: this document is called "collect" for a 
reason.
It is nothing more than a collection of raw and unprocessed thoughts.
It is a way of thinking out loud in an attempt to discover what it is that I 
have to say about different topics with the intent of giving focus to my future 
thoughts on the investigated topics.





20151001T1725 Conditional Iteration

(f$g y) gives y if (0 ~ g y) else f$g f y

If I adopt this notation then it would be read "f if g of y"
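
Read operationally this is a while loop. A minimal Go sketch, with example 
verbs of my own choosing:

package main

import "fmt"

// condIter mirrors (f$g y): give y if (0 ~ g y), else f$g f y.
// Operationally: keep applying f while g of y is not 0.
func condIter(f, g func(int) int, y int) int {
    for g(y) != 0 {
        y = f(y)
    }
    return y
}

func main() {
    half := func(y int) int { return y / 2 }
    tooBig := func(y int) int { // 0 once y is a single digit
        if y <= 9 {
            return 0
        }
        return 1
    }
    fmt.Println(condIter(half, tooBig, 300)) // 9: halve until one digit
}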





20151001T1619 Collecting Definitions for ^ redo possible expansion to $ because 
recursion is really important

Over the past few days, and perhaps throughout the past year, I've come up with 
a number of different ways to think about notation for recursion, some much 
better than others.
Right now I'm just focusing on my work with N and what I'm currently calling 
the Redo operation ^.
Here are the definitions collected into a single place.
The definitions here assume that f and g denote dyadic verbs, i.e. binary 
operations or binary functions (though not really in N, since everything is 
actually an abbreviation for a numeral or an abbreviation for a verb):

Definition A
(0 f^g y) ~ y
(x f^g y) ~ (x g y) f^g x f y

Definition B
  (0 f^g y) ~ 0 g y
(S.x f^g y) ~ (x f^g y) g x f y

Definition C
  (x f^g 0) ~ x g 0
(x f^g S y) ~ (x f^g y) g x f y

Definition D
(x f^g y) gives y if (0 ~ x g y) else x f^g f y
(f^g y) gives y if (0 ~ g y) else f^g f y

Definition E
(x f^g y) gives y if (0 ~ x) else (x g y) f^g f y





20151001T0811 Finally Folding Long Lines
Up until today the lines of this website ran all the way to their end.
Now they don't because I added fold -s to the shell command I use to transform 
my text files into html files.
It doesn't produce the prettiest output yet, but it's a step in the right 
direction.
As I've said before, I can not justify putting time into making things pretty 
when there isn't yet enough content to know what pretty might even mean for it 
in the first place.





20150930T1422 another possible definition of redo ^
The definitions for ^ given yesterday were as follows:

S successor

f:unary
n:numeral
y:numeral
f^0 y gives y
f^(S n) y gives f^n f y

f:binary
g:binary
x:numeral
y:numeral
0 f^g y gives y
x f^g y gives (x g y) f^g (x f y)

Today I present an alternate form of evaluation in an attempt to make the 
closest connection to the classical form of recursion that most are familiar 
with.
Anytime I say recursion I mean primitive recursion unless otherwise noted, for 
general recursion allows methods of calculation which rely on logical proofs 
in certain logical systems, whereas my goal is to follow Goodstein by sticking 
to arithmetic first and deriving logical operations as a consequence of 
arithmetic.
Due to the nature of recursion as revealed by Peter in her Recursive Functions, 
many forms of recursive definition are reducible to primitive recursion in one 
argument without parameters.
For example, course of values recursion is eliminable to primitive recursion in 
a single variable without parameters via the fundamental theorem of arithmetic.
Finding the correct form of definition for f^g means giving one which hints, as 
much as possible, to practical forms of these powerful reductions.

The form of definition today is derived more directly from the definition by 
recursion as given by Goodstein on pg.19 of RNT:

"
 F(x,0)=a(x)
F(x,Sy)=b(x,y,F(x,y))
" Goodstein RNT pg. 19

I've given his scheme for definition by recursion in his notation.
In N one would write these expressions as:

  (0 F y) ~ a y
(S.x F y) ~ (x F y) b x,y

or

    0.F.y ~ a.y
(S.x).F.y ~ x.F.y b x,y

I prefer the former to the latter for now.
First, a note on why the recursion in N is written as a recursion in x rather 
than in y, as in Goodstein's definition.
In N, the convention is that x most often refers to the left argument of a 
dyadic verb, whereas y usually refers to the right argument of a dyadic or 
monadic verb.
A further convention, one with far reaching implications for the way in which 
we think about recursively defined verbs, is that the recursion runs in the 
left argument.
The reason is that, in order to better express the functional relations needed 
to retain analogs of classical analytic results when using only primitive 
recursive rational functions, it becomes paramount to imagine a family of 
functions indexed by a natural numeral.
In N an expression such as 3.f or (3 f) gives a projection of a dyadic (binary) 
function (verb) f whose left argument is 3.
For example

 f:+     f is plus
 3.f     3 of f
3+       3 plus
 g:3.f   g is 3.f
 g 4
7
 3+ 4
7

Many functions in primitive recursive analysis are used via similar forms in 
majorizing arguments which play the role of classical limit arguments.
The relatively exponential function E is defined recursively from the power 
function (pow) and the factorial function (fac) so that

n numeral
y rational

  (0 E y) ~ 1
(S.n E y) ~ (n E y) + (y pow S.n) % fac S.n

So that for a given problem, one may need only 3.E or perhaps 5.E or n.E for 
some n to carry through an argument or computation.
The advantage is not only all the machinery of primitive recursive analysis, 
but also immediate computability for experiment or application.
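
Here is a sketch of this recursion in Go with exact rational arithmetic 
(math/big's Rat), written as a loop rather than literal recursion; the names 
and the loop shape are mine:

package main

import (
    "fmt"
    "math/big"
)

// E mirrors the recursion
//   (0 E y)   ~ 1
//   (S.n E y) ~ (n E y) + (y pow S.n) % fac S.n
// i.e. the n-th partial sum 1 + y + y^2/2! + .. + (y pow n) % fac n.
func E(n int, y *big.Rat) *big.Rat {
    sum := big.NewRat(1, 1) // (0 E y) ~ 1
    pow := big.NewRat(1, 1) // running y pow k
    fac := big.NewRat(1, 1) // running fac k
    for k := 1; k <= n; k++ {
        pow.Mul(pow, y)
        fac.Mul(fac, big.NewRat(int64(k), 1))
        sum.Add(sum, new(big.Rat).Quo(pow, fac))
    }
    return sum
}

func main() {
    // 5.E at y = 1: 1 + 1 + 1/2 + 1/6 + 1/24 + 1/120
    fmt.Println(E(5, big.NewRat(1, 1))) // 163/60
}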

Upon reviewing the definition I gave yesterday I realized its symmetry was not 
properly reflective of the form of recursive definition more commonly used.

An alternate way of interpreting f^g for a pair of dyadic (binary) verbs f and 
g is as follows

  (0 f^g y) ~ y
(S.x f^g y) ~ (x f^g y) f x g y

As is easily seen, this more closely reflects the statement of Goodstein's 
schema for definition by primitive recursion given above (and reproduced here 
for immediate comparison):

      0.F.y ~ a.y
  (S.x).F.y ~ x.F.y b x,y

  (0 f^g y) ~ y
(S.x f^g y) ~ (x f^g y) f x g y

Or, perhaps to better complete the established pattern to the base case:

  (0 f^g y) ~ 0 g y
(S.x f^g y) ~ (x f^g y) f x g y

This suggests that the unary form of f^g be defined via the following 
substitution into its binary counterpart:

(f^g y) ~ y f^g y

So that the factorial function is defined as:

fac: *^{1+x}
fac 3
*^{1+x} 3
3 *^{1+x} 3
(2 *^{1+x} 3) * 2{1+x}3
(2 *^{1+x} 3) * 1+2
(2 *^{1+x} 3) * 3
(1 *^{1+x} 3) * (1{1+x}3) * 3
(1 *^{1+x} 3) * (1+1) * 3
(1 *^{1+x} 3) * 2 * 3
(0 *^{1+x} 3) * (0{1+x}3) * 2 * 3
(0 *^{1+x} 3) * (1+0) * 2 * 3
(0 *^{1+x} 3) * 1 * 2 * 3
(0{1+x}3) * 1 * 2 * 3
(1+0) * 1 * 2 * 3
1 * 1 * 2 * 3
1 * 1 * 6
1 * 6
6





20150929T1522 redo ^
First, ^ is referred to as “redo” or “repeat”; it is the entry point for 
“recursion” into N notation.
For a numeral n and unary function f

f^0 y gives y
f^(S n) y gives  f^n f y

In other words, it’s just iteration (where S is the successor operation).
Now here’s where things get fun, and perhaps profoundly interesting.
The generalization of the operator ^ for two binary functions f and g is as 
follows

0 f^g y gives y
x f^g y gives (x g y) f^g (x f y)

Thus if P is the predecessor operation then

(S n) f.{y}^P.{x} y gives n f.{y}^P.{x} f y (after computation where {y} is 
projection of the right argument and {x} is projection of the left argument).

Which is the same as simple iteration in the form (f^n y)

If f and g are both unary we can make the agreement that

0 f^g y gives y
x f^g y gives g.x f^g f.y

Which lets us write

n S^P.P 0

for the rounded-up half of n (on odd n this gives the ceiling rather than the 
floor, since P.P sends 1 to 0 while S is still applied to the right argument 
on that step; the hf function below gives the floor).

A function whose recursive definition is classically given via a helper 
function:

alt 0 gives 0
alt S n gives 1 - alt n  (here - is monus i.e. m - 0 gives m and m - S n gives 
P (m - n))

so that alt 0 gives 0, alt 1 gives 1, alt 2 gives 0 and so on.

then

hf 0 gives 0
hf S n gives (hf n) + alt n

where hf is the integer half function.

Here is probably a more “likable” definition

n *^P 1

is the factorial of n (where P is the predecessor and  * is normal 
multiplication).
The following sequence of events showcases how you might imagine working with 
this notation:

4 *^P 1
P.4 *^P 4*1
3 *^P 4
P.3 *^P 3*4
2 *^P 3*4
P.2 *^P 2*3*4
1 *^P 2*3*4
P.1 *^P 1*2*3*4
0 *^P 1*2*3*4
1*2*3*4
24

Similarly

n +^P 0

is the sum of the numbers from 1 to n.
The importance is that these are recursive definitions.
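
The trace above passes an accumulator: the left argument counts down while the 
right argument collects the result. A minimal Go sketch of that evaluation, 
specialized as in the trace to a dyadic f and a unary g (the names are mine):

package main

import "fmt"

// pred is P, the predecessor, with P 0 giving 0.
func pred(x int) int {
    if x == 0 {
        return 0
    }
    return x - 1
}

// redo mirrors the definition above for a dyadic f and a unary g:
// 0 f^g y gives y, and x f^g y gives g.x f^g (x f y).
// The tuple assignment evaluates both right-hand sides before updating,
// just as each line of the trace rewrites both arguments at once.
func redo(f func(x, y int) int, g func(x int) int, x, y int) int {
    for x != 0 {
        x, y = g(x), f(x, y)
    }
    return y
}

func main() {
    mul := func(x, y int) int { return x * y }
    add := func(x, y int) int { return x + y }
    fmt.Println(redo(mul, pred, 4, 1)) // 24: n *^P 1 is the factorial of n
    fmt.Println(redo(add, pred, 4, 0)) // 10: n +^P 0 sums the numbers 1 to n
}
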
One can use a different notation, the unary numer function !, to achieve the 
same things

!2 gives (0,1)
!3 gives (0,1,2)
!10 gives (0,1,2,3,4,5,6,7,8,9) 
and so on

so that the factorial of n can be given as

*/1+!n

which is times over one plus numer n.

*/1+!3
*/1+(0,1,2)
*/(1,2,3)
1*2*3
6

Which is a more “explicit” method of calculating with factorials (this is the 
way factorial is usually given in text books when people write (fact n) = 1 * 2 
* 3 * 4 * … * n).

(x*y) is the same as x+^y 0
(x exp y) is the same as x*^y 1





20150929T1251 Refining the primitive concepts of N
The greatest contribution to my development of N has been identifying the 
dyadic adverb ^ as redo|again|repeat.
I've settled, for now, on redo because it is appropriately vague on how you 
will redo the next step, perhaps with some minor edits or changes (which is 
what happens in most places where it might be used).
Though, coming in a close second to ^ as redo, are the following 
definitions/identifications:

> max
= pd
< min
& gcd|meet
| lcm|join
~ same

Though, as will hopefully become clearer as I develop N and my presentation of 
its utility, the identification of append|affix|concat which is denoted , as a 
binary operation (dyad) should have the greatest impact on how mathematicians 
view classical expressions such as f(x,y,z) (mostly because this notation 
maintains its classical meaning in N's new interpretation).

I'll give a brief explanation of what = means (it's not that complicated 
really).

 2=1
1
 5=9
4
 9=5
4
 105=99
6

The dyadic verb = is called "positive difference" and is the most natural 
"norm" on the natural numbers.
My contention is that the sign = should no longer be used to denote equality 
because that concept is poorly defined in most cases.
Specifically, = should be included among the arithmetical signs and not among 
logical signs.
That is where ~ comes in: as same, it is much more appropriate, in general, 
for expressing the similarities that we are actually used to seeing in 
algebraic expressions.

Furthermore, there is a general outlook on arithmetic which is very clearly and 
exactly described in Goodstein's Recursive Number Theory that supports the use 
of = as positive difference.
Specifically, it is important to prove or "know" that if the positive 
difference between one numeral and another is zero then they must be the same 
numeral.
This has far reaching implications in how we conceive of sameness inside a 
theory and outside a theory.
In other words, it is important to know where arithmetic ends and where our 
statements about arithmetic begin.
Though it might sound like a theoretical issue only, it is not.
Ask any computer scientist and you will see that the notion of equality is 
usually a matter of "relevant taste".
I've decided to subsume equality as a concept under the moniker "same" for now, 
though it might stick as I go along.

In the other notation of N the positive difference satisfies the following 
identity:

{x=y} ~ {(x-y)+y-x}

Though this is given in the curly bracket notation (something which I'm still 
trying to decide whether it helps thinking or if it's just a redundant crutch).

To put into symbols the statement that two numerals whose positive difference 
is zero are the same we write:

(0 ~ x = y) ~ x ~ y

To some people, this might be profound (because it is) and to others it might 
seem silly because of how simple it is (because it is).

Why make < and > return min and max respectively?
There are at least two arguments: one is "practical" the other is 
theoretical|design based.
The practical argument is that deciding whether one numeral is greater than or 
less than another is a statement about numerals, whereas giving the maximum of 
a pair of numerals is an arithmetic action.
Another practical observation is that no matter where you might be 
"incrementing or decrementing" (something that you shouldn't be doing with N in 
the first place, but that's something completely different to write about) you 
can use this claw *= to do what you've really been wanting.
For example

 1 *= 3
1
 2 *= 3
1
 3 *= 3
0
 5 *= 1
1
 4 *= 2
1
 3 *= 3
0

There is also the situation that frequently someone wanting to "compare" 
numbers is doing so because they desire to find the minimum or maximum of them.
It's different, but it's not different to be different.
It's different because it's something that cuts closer to the "heart" of 
arithmetic and its relation to statements about arithmetic.
This being the theory argument (the details of the theory are easily read in 
Goodstein RNT, but because people are afraid of thinking I've not gone into 
detail here so as not to scare them off).
Suppose you wish to decide whether one number is greater than another.
Supposing you believe 0 to represent "True" and 1 to represent "False" you 
could write

less:{x=x<y}
 3 less 4
0
 4 less 3
1


Suppose you believe 0 to represent "False" and 1 to represent "True" you could 
write

less:{1-x=x<y}
 3 less 4
1
 4 less 3
0
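
A Go sketch of both conventions side by side, built from monus and pd as in 
the entries above (note that for gaps larger than 1 the 0-is-True version 
returns a "shade of falseness" rather than exactly 1):

package main

import "fmt"

// monus, pd, and min as before.
func monus(x, y int) int {
    if x < y {
        return 0
    }
    return x - y
}
func pd(x, y int) int  { return monus(x, y) + monus(y, x) }
func min(x, y int) int { return monus(x, monus(x, y)) }

// lessZeroTrue mirrors less:{x=x<y} under the 0-is-True convention.
func lessZeroTrue(x, y int) int { return pd(x, min(x, y)) }

// lessOneTrue mirrors less:{1-x=x<y} under the 1-is-True convention.
func lessOneTrue(x, y int) int { return monus(1, lessZeroTrue(x, y)) }

func main() {
    fmt.Println(lessZeroTrue(3, 4), lessZeroTrue(4, 3)) // 0 1
    fmt.Println(lessOneTrue(3, 4), lessOneTrue(4, 3))   // 1 0
}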

"Why do this!" you ask!!?
Because, orderings and decision is actually hard in general.
It is highly dependent on representation.
Under certain conditions you may find it necessary to redefine what it means 
for things to be less than or greater than: even when it comes to something so 
primitive as the natural numbers!
An example, but perhaps not the most accessible example, is the modular 
representation of integers described by Knuth in TAOCP Vol 2 Semi-Numerical 
Algorithms.
"But! Giving the max requires you to make a decision as to which is greater!"
No, it doesn't: consult the identity that follows.

(x > y) ~ x+y-x
(x > y) ~ y+x-y

Remember, in N, the dyad - is monus, not minus.
Minus is actually a powerful abstraction, whereas monus is a familiar activity 
(because you can't take an apple from an empty bag and produce a negative 
apple).
The behavior of - on integer numerals is the operation of minus that everyone 
is happy being familiar with (for now).

The theoretical reason for considering = < > as pd, min, max is that these 
arithmetic operations are prior to the relations of equality, less than, and 
greater than.
It is a subtle but essential distinction, one which is bound to frustrate and 
baffle those who are not willing to entertain such a fundamental change in 
perspective.

There is an even deeper reason for defining things this way.
Propositions about arithmetic and arithmetic should be separated as much as 
possible so that it is easier to interpret propositions about arithmetic in 
arithmetic in ways that we have yet to imagine.
For example, there are different ways we can think of "ordering" the natural 
numbers.
Perhaps we want to imagine all the even numbers coming before the odd numbers!
Our imagination in this regard could change on a whim, but by agreeing on 
x>y as an abbreviation for x+y-x or y+x-y we are focusing on how the 
fundamental arithmetic operations of plus and monus combine with each other.
In other words, whatever it is that we might wish from an arithmetic, we seem 
to be bound to introduce the concept of plus and monus from which the expression

(x+y-x) ~ y+x-y

is satisfied by any numerals x and y.
The fact that I have called < min and > max already "plays favorites" to 
the classical ordering of the natural numerals, but this is a design feature so 
that users are not immediately confused.

Another way of thinking about these things is to see that the classical notion 
that a numeral x is less than or equal to a numeral y is encapsulated in the 
following proposition:

x ~ y-y-x

Again, this might seem silly, but, much like the makers of the C programming 
language said, it wears well as one uses it.
It opens doors from the beginning without burdening the user of mathematics 
with unnecessary or overwhelming choices.
One might think of it as a more "neutral" way of doing arithmetic.
An erudite would call this perspective "Post-Gödelian Arithmetic".
The reason?
All of these seemingly annoying redefinitions of age old concepts bear out all 
the way to the limits of arithmetic.
Specifically, to the point where arithmetic can be used to say that "There is 
an equation 0 ~ f n which is verifiable but not provable in our primitive 
arithmetic".
This seemingly abstract fact is actually considerably concrete, and the 
arithmetic conventions of N take this concrete fact into account from the 
beginning.
This fact follows from a specific interpretation of propositions about 
arithmetic as parts of arithmetic (Gödel Numerals).





20150927T1517 Aren't You Worried about someone stealing your ideas?
Yes and no.
I am worried about people not giving credit where credit is due e.g. 
Goodstein's work is largely overlooked even though he, like Russell, Hilbert, 
and others, has produced a monumentally important contribution to the 
foundation of mathematics and mathematical philosophy.
If someone copies my ideas then at least there will be detailed and public 
records that my work has been done by me over an iterated period of time.





20150927T1512 Primitive Inequalities
Any inequalities from Hardy, Littlewood, and Pólya Inequalities which are 
provable or derivable inside Goodstein's Primitive Recursive Arithmetic are to 
be called Primitive Inequalities.
Though there is the suggestion that they should be called elementary, that 
phrase clashes with the established use of "elementary" to describe a class of 
functions, so I do not favor it.
It is important to identify which inequalities from Hardy, Littlewood, and 
Pólya Inequalities are provable in Goodstein's system as they are the principle 
relations used to establish what are commonly accepted as essential analytic 
results.
Comparing where these inequalities are used in Goodstein's Recursive Analysis 
will reveal the different branches of analysis that are united and divided by 
Goodstein's methods.





20150927T1451 The Design of Distributed and Parallel Computing Systems
We lack a practical foundation for parallel and distributed computing systems.
This is not because we lack the knowledge needed to understand these topics, 
rather we lack the proper narrative to carry this knowledge to those who need 
it.
Those who need it is a subjective group, but it is my contention that 
elementary school students need such knowledge.
They don't need knowledge of parallel computing, but they need the notation to 
describe their naturally developing tabular thought.
Humans think recursively and act iteratively.
Iterative actions can be parallel or "distributed" (not exclusively).
The elementary school use of phrases like "do the same thing again" or "repeat 
that but now on this one" is just one of the ways in which we introduce 
parallelism (or perhaps just concurrency) into the minds of elementary school 
students.
Multidimensional arithmetic is just one access point for putting parallel 
concepts into the hands of those who need it most.





20150927T1440 Goodstein and Iverson and Peter
I've written much on how the works of Goodstein and Iverson just work together.
There are some hints, though nothing explicit, that Iverson was aware of 
Goodstein's work but did not care to mention it.
Specifically, the writings in Goodstein's "Fundamental Concepts of Mathematics" 
are such that Iverson would have been not only attracted to its content but 
also its conceptual perspective.
One thing which is striking is that Goodstein's FCOM was published in 1962 and 
Iverson's "A Programming Language" was published in that same year.
In Goodstein's FCOM he uses the phrase "pronumerals" which is also used by 
Iverson in APL.
I could not find a single mention in Iverson or Goodstein of the other's work.
I continue to find statements about "making computation a mathematical 
activity" in papers and publications without seeing any mention of the works of 
Peter or Goodstein.
Goodstein makes it clear, multiple times throughout his work, that his efforts 
are built entirely on those of Peter and one can even find the appropriate 
references to the Arithmetization of Logic given by Kleene in his Introduction 
to Metamathematics.
It is very frustrating to me to not see more mention of Peter's work on 
Recursive Functions in modern literature.
This is not only because she is a woman, something which people tend to 
overlook or forget to mention, but because before her work on recursive 
functions there was no single collection of what constitute recursive functions 
and how the differing forms of recursion and recursiveness are or are not 
reducible to forms of primitive recursion.

As much as modern computation is about recursion, Peter should be more 
frequently mentioned in modern texts because it is because of her that we have 
most of our fundamental results in the foundation of recursion and 
recursiveness.





20150927T1138 Goodstein Realized Leibniz's Calculus Ratiocinator
I am surprised that more was not made of Goodstein's work at the time it was 
completed.
For those familiar with Leibniz's efforts to develop what he referred to as a 
"Calculus Ratiocinator" they will find in Goodstein's 'Recursive Number Theory' 
a complete exposition of the ideal desired by Leibniz.
Sadly, due to the popularity of Leibniz's work, there have been a great number 
of misconceptions as to what was or would become his final aim in creating a 
Calculus Ratiocinator.
Some would describe his desired goal as an "algebra of rational thought" or as 
an anticipation of modern mathematical logic.
Leibniz's contributions to modern mathematical logic and philosophy are perhaps 
best encapsulated in Russell's work, though certainly there is no substitute 
for the primary sources.

To answer the question "In what way did Goodstein realize Leibniz's Calculus 
Ratiocinator?":
Goodstein's recursive number theory is a surprisingly simple formalization of 
primitive recursive function theory, but, more importantly, it is an intuitive 
foundation for all of number theory.
Furthermore, he is able to show how what passes as modern mathematical logic is 
developed entirely through the use of arithmetic operations.
It is important to contrast this with the works of Kleene and Gödel.
Kleene and Gödel both developed an arithmetization of logic as a tool for 
reducing their metamathematical arguments to the most "trustworthy" operations 
of arithmetic.
Goodstein began not by seeking out a logical machinery, but rather a self 
contained description of arithmetic as the art of primitive recursive 
reductions and abbreviations.

It is known that Leibniz attempted many times to develop his Calculus 
Ratiocinator using the elementary operations of arithmetic.
One hardly needs a primary source to imagine a man of his intellect trying to 
seek out patterns in arithmetic which matched or met his desired goals in 
research.
Prior to any of our modern mathematical logic was elementary arithmetic, and it 
is in elementary arithmetic that notions of system, structure, and irrefutable 
proof and truth developed.
A modern mathematician would say "Leibniz sought a model of mathematical logic 
in elementary arithmetic" though it took the works of Boole to give a clearer 
hint at where one might find an arithmetic or algebra of argument.

Leibniz would have recognized immediately that Goodstein's formalism of 
primitive recursive arithmetic is precisely the Calculus ratiocinator sought 
for the following reasons:
1 The only inference rules are rules of substitution, which encapsulate his 
'identity of indiscernibles', and a primitive recursive uniqueness rule which 
is a further application of the identity of indiscernibles.
2 All arguments are made by eliminating notation until a numeral is reached.
3 Goodstein's development of mathematical logic in his primitive recursive 
arithmetic satisfies Leibniz's law of identity/contradiction.
4 In Goodstein's system only verifiable equations are provable, which I 
believe Leibniz would have quickly recognized as an interpretation of his law 
of Sufficient Reason.
5 Perhaps most important, Goodstein has produced a clear, complete, simple, 
and general specification of the mechanical movements needed by any notation 
which wishes to capture the most basic of mathematical acts: elementary 
arithmetic.

The reasons for believing Goodstein's work realizes Leibniz's ideal go on and 
on.
Anyone familiar with Leibniz should read Goodstein and see what they've missed 
all these years.
I can not overstate the importance of making this connection between the work 
of Leibniz and the work of Goodstein.
It may come as no surprise that my work on N has been inspired and motivated by 
a desire to craft these concepts in the most computable and calculable form 
possible.
Modern mathematics, while a tool of great power and generality, is still a 
poorly designed tool, one whose use requires an almost comical amount of 
expertise to wield wisely.
My purpose in making N and in my work is to refine the design of mathematics so 
that its use may give the widest impact on everyday life which it inevitably 
must have.





20150926T1631 Goodstein's Rt is a number theorist's isqrt
The integer square root of a number n is the largest number whose square is 
less than or equal to n.
It is possible to restrict this to just less than, and since I have not 
considered the boundary cases yet I will go with the convention established by 
number theorists.
The integer square root is fundamental to modern mathematical behavior.
It is also an essential part in the reduction proofs created by Peter in her 
Recursive Functions.
Its importance across all mathematics is not yet widely appreciated because its 
explicit use is often not mentioned.
Its importance is so significant that I have dedicated the unary symbol % of N 
to the isqrt operation.
From now on % will, in my mind, denote the integer square root of its argument 
in unary, and will denote the quotient of x divided by y in the binary case.





20150925T1520 Topics to Think About
Growing Ideas and Understanding with Git and GitHub
Emergence of logic from arithmetic rather than the other way around.
 Logic as a tool for thought is powerful but requires expert experience.
 Arithmetic as a tool for thought is available to all, and as a foundation for 
future understanding has yet to be fully utilized.





20150925T1455 Things I Should Think More About and Do
The design of this website is ugly.
This is primarily because I have put more of my time and effort into learning 
and understanding topics than into presenting what I've learned and understood 
about those topics.
There is a balance that one must strike in the work that they do.
That is a balance between doing work and adequately presenting the work that 
you have done.
When we say "presenting the work we've done" we're actually completing the 
final step of solving a problem: look back.
When we look back at what we've done we see how to change and alter its 
presentation so that whatever solution we've found might be seen at-a-glance or 
at least may be processed by our future self or others with the least amount of 
cognitive effort.

Another reason the design of this website is ugly is that you can not design 
around nothing.
Before I spend time on a design I must identify relevant constraints on that 
design.
For this website the primary constraint is the content that I choose to put 
here.
Without clarifying the content I can not begin to clarify a design.
Another way of saying this is that if design has a goal then that goal must be 
identified before we begin the design process.
Here the goal is likely something like "Present arguments and information with 
crystal clarity."
This means that there should be as few moving parts to the mechanism of 
presentation as possible for the arguments and information given here already 
have an over abundance of moving parts.





20150925T1454 
The things people see and feel are guided by their thoughts and impressions for 
better or for worse.





20150924T1655 Dyad # take|project? Dyad , inject|affix|append
These verbs are obviously some of the most important in N. Regardless of what 
their ultimate implementation will be, the conceptual kernel of each must be so 
unavoidably necessary that one could not imagine living without them.
For #, its operation is projection i.e. it is what a mathematician would most 
likely call a pi-function (not the number pi, the function pi).
Its left argument is a list of indexes and its right is the object to be 
projected.
Since N supports something like python's tuples, or C++'s homogeneous vectors, 
it is easy to see that projection is what is happening if you take the items at 
the left argument's indices.
What's hard to know is if take is the correct word for humans to use when 
referring to these general acts of projection, especially when dealing with 
tree like objects, dictionary|map like objects, or other objects.
I'm not saying that anything like dictionary|map is unavoidable as an 
object (noun), but that one must consider whether take is an appropriate verb 
for such things over project in case they are ultimately found to be 
unavoidable nouns for achieving N's goals.
x # y when x is a numeral and y is a numeral vector (table?) returns x items 
from y, going back to the beginning of y if x exceeds the index of y i.e. the 
k-th item of x # y is the item of y whose index is the remainder of k divided 
by the length of y.
This style of index arithmetic is intended to eliminate references to items 
outside the index of a noun.
Some see this as the loss of a huge opportunity to catch common index errors: 
rather than the whole show stopping just because an index is outside the length 
of a vector object, the show just goes on without alerting the coder.
My response to a critic from this perspective is that you must handle your 
errors to know when they occur!
In other words: own your errors, don't blame the computer.
In other other words: you can't reference things outside a noun using clever 
index arithmetic, because the index arithmetic is wiser than you (don't go 
against math, work with math).
Also, a vector is not a list.
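
A hedged Go sketch of this cyclic take for the simplest case, a numeral left 
argument and a vector right argument (take is my naming; none of this is 
settled N semantics):

package main

import "fmt"

// take sketches x # y for a numeral x and a vector y: the k-th item of
// x # y is the item of y at index k modulo the length of y, so indexing
// wraps to the beginning instead of ever running off the end.
func take(x int, y []int) []int {
    out := make([]int, x)
    for k := range out {
        out[k] = y[k%len(y)]
    }
    return out
}

func main() {
    fmt.Println(take(7, []int{1, 2, 3})) // [1 2 3 1 2 3 1]
}
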
Which brings me to , as injection or affix.
Injection is different in behavior from affix-ion:
an injection usually makes things flat in the relevant way, while an affix-ion 
can be seen as either a nesting (tuples of tuples to python people), or perhaps 
a type of injection at different locations.
The use of injection as a descriptor of the behavior of the dyad , is difficult 
to argue from a design perspective (it is not a friendly word to English 
speakers), but affix is also connotative of a potentially misleading 
perspective (that of nesting where no nesting has occurred).
Though the act of injection might not be easily suggested as it requires a left 
argument having a sort of permutation like structure.
For example,  (x;y),z might be interpreted as a command to put z at x in y so 
as to make a new flat vector i.e. to amend y so as to make room for z.
There is also the potential of using integer notation to give a three command 
form to , so that x 0n2, y gives , an adverbial behavior.
It's not clear if this makes conceptual sense.
These subtle idioms of the language are to be developed as needed, and so far 
affix will probably do fine for the dyad , .





20150924T1602 Hardy, Littlewood, and Pólya Inequalities
I read this book a while back, and happened to be tying my shoes in my closet 
today when my gaze came upon it.
I realized immediately that all the work I've been doing to develop N could be 
put to good use, or to good test, by translating and interpreting the arguments 
and result in N and seeing where they fit inside the arguments given by 
Goodstein in RNT.
Most importantly, HLP consider their work to cover the elementary inequalities 
in use throughout real, complex, and functional analysis.
My perspective on real, complex, and functional analysis is significantly 
different from theirs and most modern mathematicians, so it should be 
interesting to see where I can fit their results in my head.
The good thing is that HLP are smarter and wiser than I am likely to ever be, 
and their results are given in their most fundamental form which is what is of 
direct interest to me and my notational language N.





20150924T1553 Quotes and Goodstein's Introduction of Zero in FCOM
The main thing to remember is to keep the main thing the main thing.
Time stamp everything, you never know when you need to know when.
Well done is better than well said.
If it's meant to be then it's up to me.

Goodstein waits until page 43 to introduce the number zero!
It's brilliant and genius.
He gives zero meaning as a number by introducing it with modular arithmetic, or 
what he more aptly calls 'arithmetic of remainders'.
This is a design feature of Goodstein's perspective and methods.
He doesn't just want to do as much as he can with as little as he can.
He wants to do it in a clean, clear, and vivid way.
A way that humans can follow and that seems to almost be self justifying at 
each step.
Goodstein's Fundamental Concepts of Mathematics is a must read for any math 
teacher at the middle or high school level.
Not because middle or high school teachers will cover the material in it (they 
should, but it's unlikely they would), but because of the vivid perspective it 
gives on the part math plays in everyday life.
It's the only book of its kind that I've ever read (and I've read A LOT of math 
books, like a whole lot, and when I say I've read them, I mean I really went 
through and read every page, something that a lot of people tend not to do, 
especially when the book is really popular).
It's also interesting to note that Goodstein uses the sign 0 for counting with 
an abacus using standard decimal notation, but there it is a place holder, and 
the digit by itself is not yet recognized as a numeral i.e. not clearly 
denoting a thing that we will perform arithmetical acts with.
The reason it makes sense to introduce zero with remainders is because you can 
think of the remainder as what is left over after you've taken away as many 
collections containing the divisor number of objects as you can from the 
collection being divided.
It's genius, because it means that zero can be interpreted as what's left over 
after everything has been taken away.
At the same time, by deferring its introduction we avoid a massive number of 
questions about how best to define, via recursion, the basic operations of 
arithmetic: addition, subtraction, multiplication, exponentiation, tetration 
etc.
If you want a crystal clear description of the fundamental concepts of modern 
mathematics look no further than Goodstein's "Fundamental Concepts of 
Mathematics".



I want to build a list comprehension using the following code:
[[x,y] if 3!=x+y else [] for x in range(3) for y in range(4)]
but instead of inserting an empty list where the condition is not met, I want 
it to not do anything and just go on.
Answer: move the condition to a trailing if clause, which filters instead of 
branching:
[[x,y] for x in range(3) for y in range(4) if 3!=x+y]

archive a message from Mail.app with control+command+A

Errors should be tracked across all areas of life.

Learn from your mistakes.
Don't find fault, find a remedy.

Download latest Python (3.5) pre packaged for mac
Invoke from Terminal with python3.5
The version of python invoked with 'python' is 2.7.10

Learning is an endless bootstrapping process.

http://brew.sh/
ruby -e "$(curl -fsSL 
https://raw.githubusercontent.com/Homebrew/install/master/install)"
brew update
brew install gcc
man gcc-5

tree/node notation
a
|\
b c
|\
d c
|
e

a.c
a.b
a.b.c
a.b.d
a.b.d.e
note, a.c ~= a.b.c (unless they match)
trees locally, graphs globally


colophon
John Meuser
Inconsolata
Solarized
Git
GitHub
HTML
CSS
ed
sed
TextEdit

projects
goals
purpose
jsource
music

dictionaries
key-values
hash
Algorithms
Data Structures
Programming Languages
Artificial Intelligence
Databases
Security
Distributed Systems
Operating Systems
Networking
Topologies
Protocols
Applications
Network Congestion
Network Resilience
Popular Operating Systems
Mobile Operating Systems
Special Operating Systems
Components
Client-Server
Map-Reduce
ACID
CAP
Concurrency
Synchronization
Cryptography
Hashing
Information Security
Network Security
Secure Coding
Authentication
Relational Databases
NoSQL Databases
Object-oriented Databases
Database Design and Modeling
Transactions and concurrency
Administration
Storage
Database Security
Machine Learning
Natural Language Processing
Deep Learning
Search & Optimization
Reasoning
Classification
Statistical Learning
Game Theory
Popular Languages
Scripting Languages
Web Languages
Mobile Languages
Functional Languages
Esoteric Languages
Other Languages
Lists
Arrays
Trees
Hashes
Graphs
Sorting
Search
Recursion
Dynamic Programming
Greedy Algorithms
Strings
Graph Theory
Combinatorics
Number Theory
Bit Manipulation
Summations and Algebra
Probability
Geometry
Randomized Algorithms
NP Complete problems
Analytic number theory
Algebraic number theory
Probabilistic number theory
Enumerative combinatorics
Analytic combinatorics
Matroid theory
Probabilistic combinatorics
Algebraic combinatorics
Geometric combinatorics
Topological combinatorics
Arithmetic combinatorics
Combinatorial optimization
Discrete and computational geometry
Elementary Graph Algorithms
Minimum Spanning Trees
Single-Source Shortest Paths
All-Pairs Shortest Paths
Maximum Flow
Floyd-Warshall
Concatenation & Substrings
Prefixes & Suffixes
Rotations
Reversal
Ordering
Encoding
Representation
Parsing
Mining
Sequencing
Partitioning
Searching
Manipulation
Matching
Regular Expressions
Pure Greedy
Orthogonal
Relaxed
Dijkstra's shortest path algorithm
Fibonacci sequence
Matrix Chain Multiplication
Longest Common Subsequence
Sequence alignment
Top-Down
Bottom-Up
Backtracking
Binary Search
Breadth First Search
Depth First Search
Combinatorial Search
Simple Sorts
Efficient Sorts
Bubble Sorts
Distribution Sorts
Basic Graph
Adjacency list
Adjacency matrix
Binary decision diagram
Directed graph
Directed acyclic graph
Multigraph
Hypergraph
Hash table
Hash list
Hash tree
Hash trie
Bloom filter
Distributed hash table
Double Hashing
Dynamic perfect hash table
Prefix hash tree
Space Partitioning Trees
Application Specific Trees
Binary Trees
B-Trees
Multiway Trees
Heaps
Tries
Bit array
Bit field
Bitmap
Dynamic array
Hashed array tree
Lookup table
Matrix
Parallel array
Sorted array
Sparse matrix
Variable length array
Linked list
Doubly linked list
Array list
Self-organizing list
Skip list
Doubly connected edge list
Difference list
Free list
VB .NET
Pascal
R
D
Groovy
Brainfuck
LOLCODE
WhiteSpace
Scala
Haskell
Clojure
Erlang
F#
OCaml
Racket
Common LISP
SWIFT
Objective C
Php
Javascript
HTML5
Perl
Lua
C
C++
Java
Python
Ruby
C#
Question answering
Sentiment Analysis
Speech Recognition
Text-to-Speech Conversion
Named entity recognition
Decision tree learning
Association rule learning
Artificial neural networks
Inductive logic programming
Support vector machines
Clustering
Bayesian networks
Reinforcement learning
Representation learning
Similarity and metric learning
Sparse dictionary learning
Genetic algorithms
Cassandra
HBase
MongoDB
DB2
PostgreSQL
Microsoft SQL Server
MySQL
Symmetric-key cryptography
Public-key cryptography
Cryptanalysis
Cryptographic primitives
Cryptosystems
Kernel
File Systems
Memory Management
Process Management
Distributed Operating Systems
Network Operating Systems
Object Oriented Operating Systems
Embedded Operating Systems
Android
iOS
Windows Phone
Linux
OSX
BSD
UNIX
Windows
IPTV
Videoconferencing
Online games
VoIP
Routing Protocols
Secure Protocols
TCP
IP
HTTP
POP
IMAP
FTP
Basic Trie
Radix tree
Suffix tree
Suffix array
B-trie
Binary heap
Weak heap
Binomial heap
Fibonacci heap
Ternary tree
Disjoint-set
Fusion tree
Fenwick tree
Basic B-tree
B+ tree
B* tree
2-3 tree
2-3-4 tree
Queap
Binary Search Tree
Self balancing Tree
Red-Black Trees
AVL tree
Splay tree
Weight-balanced tree
Abstract syntax tree
Parse tree
Decision tree
Minimax tree
Segment tree
R-tree
Counting sort
Bucket sort
Radix sort
Bubble sort
Shell sort
Comb sort
Mergesort
Heapsort
Quicksort
Insertion sort
Selection sort
Naive String Matching
Rabin-Karp
Finite Automata
Naive string search
Finite State Automaton
Index
Fuzzy Searches
Breadth First Search
Depth First Search
Dijkstra's Shortest Path

Copyright John Meuser 2015