Did some dependency updates and noticed that lein ancient wasn't warning me about out-of-date plugins. Checked its page and saw that's an option, and that I also wasn't giving lein ancient permission to tell me about qualified releases. Updated the alias to support those things and updated a bunch of dependencies.
Re-tested on IE7 to see if the JS error is gone w/ latest cljs. It's not.
Commented out the PhantomJS tests that work locally but not on Travis CI so the tests are passing again. Need to decide whether to replace those with real browsers from Sauce or with just mocked requests. The mocked requests could answer whether things rendered on the page from the config file (part of what's being tested), but not whether the cljs code is working to make the (admittedly very simple) interactions on the page work. The mocked requests would work for the admin testing as well.
Lisp in Summer Projects is a contest awarding prizes for demonstrating interesting and useful programs, technologies and art using any LISP-based technology.
It's supported by various NYC based LISP groups to encourage use of LISP in the classic "summer project" of the geek set.
It runs from June 24th - Sept. 30th 2013.
I'm participating this year with two different summer projects:
Falkland CMS - A Clojure/ClojureScript/CouchDB powered Curation Management System.
coming-soon - A Clojure/ClojureScript/Redis powered simple landing page email collector.
I'm +1 on being able to re-order cards.
The Koch Curve was invented/discovered in the early 20th century by Helge von Koch and is a simple fractal.
Start with 1 segment, a straight line, and iterate a simple rule. The rule is:
At each time interval, take each segment, remove the middle third, and replace it with 2 segments that are each 1/3 the length of the original segment, joined at the ends.
If we imagine an iterated Koch curve as a coastline, we can ask: how long is it at each iteration? Each iteration is equivalent to using a smaller ruler.
Level - Seg Length - Num Segs - Curve Length
0 - L - 1 - L
1 - L/3 - 4 - (4/3)L
2 - L/9 - 16 - (16/9)L
3 - L/27 - 64 - (64/27)L
Here the "Curve Length" is the "coastline length".
So the curve length at level n is: (4/3)^n * L
The growth of our pattern means a 1 meter line segment at level 0 is ~2 billion miles of curve by level 100. So we'd have 2 billion miles of line segments squeezed into the space of a 1 meter line.
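The table and the level-100 figure are easy to sanity check in Python (the meters-to-miles conversion is mine, not from the lecture):

```python
# Koch curve: each level replaces every segment with 4 segments,
# each 1/3 the length, so the total length at level n is (4/3)^n * L.

def koch_length(n, L=1.0):
    return (4 / 3) ** n * L

for n in range(4):
    print(n, koch_length(n))  # 1.0, 1.333..., 1.777..., 2.370...

# a 1 meter segment iterated to level 100, converted to miles
miles = koch_length(100, L=1.0) / 1609.344
print(f"{miles:.2e} miles")  # roughly 1.9e9, i.e. ~2 billion miles
```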
Fractal-like structures in nature don't get to level 100 (only mathematical fractals do), but it's a dramatic demonstration that fractal geometry is space-filling, which is a very efficient use of space. That's why we see fractal patterns in nature in places like the circulatory system, root systems and brains.
Benoit Mandelbrot coined the term fractal from a Latin root.
Mandelbrot was trying to develop a "theory of roughness" to describe the geometry of the real world. He created fractal geometry.
Mandelbrot looked at measuring coastlines on a map (famously Great Britain). With a shorter and shorter "ruler" (and a higher and higher resolution map) you get a longer and longer measurement of the coastline because the "rulers" can fit into even more nooks and crannies of the rough/rugged coastline.
So the question of how long the coastline is depends on the length of the ruler you use to measure it, since it has self-similar roughness/ruggedness at different scales and is not smooth.
Fractals are objects that are self-similar at different scales.
A tree is fractal because it has a trunk with branches which have branches which have branches...
A fractal is self-similar at all possible scales and is a mathematical concept, vs. fractal-like, which is self-similarity at multiple scales and is what is found in nature.
Leaf veins, galaxy clusters, tree roots, mountain ranges, and web page links are all fractal-like.
Liz Bradley - Computer Scientist
Modern computer systems are so complex that they are non-linear dynamical systems, so they can display chaos: that is, sensitive dependence on initial conditions. The initial state of the computer (the contents of all its registers and memory) results in chaotic performance or chaotic memory access patterns from repeated runs of the same program.
Here the "R" is the computer program that's being run, so there is no way to change it smoothly or understand it simply like in the logistic map.
Lagrangian coherent structures are dynamically distinct regions of a time-varying system, such as in groups of passengers moving through a transit system, or in clouds.
The morning glory clouds in Australia exhibit dramatic Lagrangian coherent structures: Morning Glory Cloud
Lagrangian coherent structures are interesting to complexity theory because they display emergence, and from an information theory perspective they represent an information loss in the system: it's possible to describe the structures rather than the individual elements forming them, and thereby describe the system with less information than would otherwise be required.
Chaos is seemingly random behavior with sensitive dependence on initial conditions. The logistic map can display chaos for higher values of R (growth rates). This is called deterministic chaos: chaos arising from a completely deterministic system or equation. Perfect prediction is impossible in deterministic chaos since initial conditions can never be exactly known.
While perfect prediction is impossible, the study of chaos has led to the discovery of universality in chaos: highly predictable universal properties shared by a wide variety of chaotic systems.
All systems which display the period-doubling route to chaos have a unimodal, or "one-humped", parabolic graph of population to next generation's population.
One such system is the sine map, which is the pure mathematical function:
x_t+1 = (R/4) * sin(pi * x_t)
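A quick Python check (mine, not from the course) that the sine map and the logistic map share the unimodal "one-humped" shape behind this universality, both peaking at x = 0.5:

```python
import math

def sine_map(x, R=4.0):
    # the sine map: x_t+1 = (R/4) * sin(pi * x_t)
    return (R / 4) * math.sin(math.pi * x)

def logistic_map(x, R=4.0):
    # the logistic map: x_t+1 = R * (x_t - x_t^2)
    return R * (x - x * x)

# both are unimodal ("one-humped") on [0, 1], peaking at x = 0.5
xs = [i / 100 for i in range(101)]
print(max(xs, key=sine_map))      # 0.5
print(max(xs, key=logistic_map))  # 0.5
```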
Another universality is the values of R at which the period bifurcations (doublings) occur. The bifurcations come quicker and quicker as R increases and approaches the onset of chaos. The rate at which the bifurcation intervals shrink approaches a limit of ~4.6692016..., known as Feigenbaum's constant, a universal constant for chaotic systems with unimodal (one-humped) maps.
The Feigenbaum constant has been experimentally confirmed in chaotic systems such as fluid flow and electric circuits.
These universal properties are the order in the chaos.
A periodic attractor repeats itself oscillating between values with each generation. A periodic attractor with period 2 oscillates between 2 values.
The state of the system is which value of the periodic attractor the system is currently at.
A periodic attractor with period 4 oscillates between 4 values. As you increase the growth rate R, the period of the system doubles. At a high enough growth rate (such as 4) there is no longer a fixed or periodic attractor and the system is chaotic: it displays sensitive dependence on initial conditions.
At a growth rate R of 4, just a tiny change in the initial population % will have almost no effect in initial generations, but the systems will diverge and follow radically different chaotic paths in later generations.
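This divergence is easy to see numerically; a minimal sketch with starting values of my choosing:

```python
def step(x):
    # logistic map at R = 4, the chaotic regime
    return 4 * (x - x * x)

a, b = 0.2, 0.2 + 1e-9  # two initial populations differing by one billionth
diffs = []
for t in range(100):
    a, b = step(a), step(b)
    diffs.append(abs(a - b))

print(diffs[0])    # still tiny after one generation
print(max(diffs))  # grows to order 1: the trajectories fully diverge
```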
"Prediction becomes impossible." - Poincaré.
May said in 1976 that the logistic map tells us that even with a simple model with all parameters specified exactly, prediction is still impossible.
With a bifurcation diagram of R mapped to the fixed point attractor x, we can see that as R reaches 3 we get a period of 2, and as R approaches 4 we reach the onset of chaos at R ≈ 3.569946, where there is no longer a finite period. At this point the system reaches the chaotic attractor, or strange attractor.
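The period doubling can be checked numerically by iterating past a transient and counting the distinct values the system cycles through (a rough sketch; the R values are chosen from within known periodic windows):

```python
def attractor_values(R, transient=1000, samples=64):
    # iterate the logistic map past the transient, then collect the
    # values the system cycles through (rounded to merge near-equal values)
    x = 0.2
    for _ in range(transient):
        x = R * (x - x * x)
    values = set()
    for _ in range(samples):
        x = R * (x - x * x)
        values.add(round(x, 6))
    return sorted(values)

print(len(attractor_values(2.9)))  # 1: fixed point attractor
print(len(attractor_values(3.2)))  # 2: period 2
print(len(attractor_values(3.5)))  # 4: period 4
print(len(attractor_values(3.9)))  # many: chaos, no finite period
```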
The logistic map displays the period-doubling route to chaos.
The logistic model:
n_t+1 = (birthrate - deathrate) * [n_t - (n_t^2 / max_population)]
The logistic map is a famous algorithm in chaos theory:
R = birthrate - deathrate
K = max population (carrying capacity)
x_t = n_t / K
x_t+1 = R * (x_t - x_t^2)
Robert May and Mitchell Feigenbaum studied the logistic map.
x is always between 0 and 1.
An example:
R = 2
x_0 = .2 (20% of the carrying capacity)
x_1 = 2 * (.2 - .2^2) = .32 (32% of the carrying capacity)
x_2 = 2 * (.32 - .32^2) = .4352
...
x slowly approaches .5, reaches it at x_7, and stays at .5 for all the following generations. .5 is called an attractor; in this case it is a fixed point attractor.
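The iteration above, continued in Python:

```python
x = 0.2  # x_0: 20% of carrying capacity
R = 2
for t in range(1, 11):
    x = R * (x - x ** 2)
    print(t, round(x, 6))
# 1 0.32
# 2 0.4352
# 3 0.491602
# ... then converges onto the fixed point attractor at 0.5
```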
A graph of x_t compared to x_t+1 for the logistic map forms a parabola.
The logistic map is a model, a simplified representation of reality.
In a fixed point attractor system, varying R (the birthrate - deathrate) changes the value of the fixed point attractor: the percentage of the max population (carrying capacity) the system settles to after sufficient generations. This is referred to as the ultimate dynamics of the system.
In a linear system the whole is the sum of the parts. If 1 produces 5, then 5 produces 25 and 100 produces 500.
But if you add a deathrate due to overcrowding (carrying capacity is the maximum population that can be supported) you get a non-linear system:
n_t+1 = birthrate * [n_t - deaths]
where deaths is defined as n_t^2 / max_population
If you extend it slightly you get the "logistic model" of population growth, developed in the early 19th century by Verhulst:
n_t+1 = (birthrate - deathrate) * [n_t - (n_t^2 / max_population)]
Logistic Population Growth Model (NetLogo)
This model shows a "logistic function": growth starts out fast, then slows, and ultimately goes flat.
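A sketch of the logistic model in Python; the parameter values here are made up for illustration, not taken from the NetLogo model:

```python
# Logistic model: n_t+1 = (birthrate - deathrate) * (n_t - n_t^2 / K).
# These parameter values are made up for illustration.
birthrate, deathrate, K = 2.0, 0.5, 1000.0
R = birthrate - deathrate

n = 1.0
history = [n]
for _ in range(50):
    n = R * (n - n * n / K)
    history.append(n)

# near-exponential growth at first, then the curve flattens out
# at the fixed point K * (1 - 1/R)
print(round(history[5], 2), round(history[-1], 2))
```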
With a certain logistic model, 1 individual produces 7 after 3 steps, but 5 individuals don't produce 35, they produce just 21. So the whole is not just the sum of the parts.
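The lecture's exact numbers (1 produces 7, 5 produce 21) come from parameters not recorded here; with made-up parameters the same failure of additivity shows up:

```python
def step(n, R=2.0, K=100.0):
    # one generation of the logistic model; R and K are made up
    return R * (n - n * n / K)

def after_3_steps(n):
    for _ in range(3):
        n = step(n)
    return n

one = after_3_steps(1)
five = after_3_steps(5)
print(round(one, 2), round(five, 2), round(5 * one, 2))
# five starters produce less than 5x what one produces: not linear
```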
Iteration is doing something over and over again.
In population growth, reproduction is iterated.
Simple Population Growth Model (NetLogo)
n is population
n_0 is initial population (1)
n_1 is population at year 1 (2)
n_2 is population at year 2 (4)
n_t is population at year t
birthrate = number of offspring produced per year from 1 rabbit (2)
n_1 = birthrate * n_0
n_2 = birthrate * n_1
n_t+1 = birthrate * n_t
Exponential population growth: year 0 is 1, year 1 is 2, year 2 is 4, year 3 is 8, ..., year t is 2^t.
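The doubling iteration in Python:

```python
birthrate = 2
n = 1  # n_0, the initial population
values = []
for year in range(5):
    values.append(n)
    print(year, n)  # 0 1, 1 2, 2 4, 3 8, 4 16
    n = birthrate * n
# after t years the population is birthrate**t = 2**t
```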
Exponential growth is unrealistic in the long term.
Population vs. time is an exponential function.
This year vs. last year is a linear function yielding a straight line:
n_t+1 = birthrate * n_t
y = slope * x
Independence in a system causes linear growth.
Question we'll be exploring: How is chaos different from randomness?
Chaos
Chaos is "sensitive dependence on initial conditions".
"Butterfly effect" - a butterfly flapping its wings in Japan could cause changes in weather that eventually lead to a hurricane
Chaos shows up in:
Dynamics is how systems change over time.
Examples:
Dynamics develops quantitative descriptions of changing systems.
Dynamic systems theory is a vocabulary and branch of math that describes systems changing over time.
Some historical figures and advances in dynamics:
Aristotle (4th century BC) first described laws of movement (separate laws for heaven and Earth).
Copernicus (16th century) - stationary sun with planets in orbit
Galileo (17th century) - proved Aristotle's laws wrong experimentally
Newton (17th century) - gravity is consistent throughout the universe; concurrently (with Leibniz) created the calculus
Laplace (18th century) - Newtonian reduction to causal determinism
Poincaré (19th century) - planted the seeds of doubt in reductionism that led to chaos theory: small changes in initial conditions lead to large differences in final outcomes
The world in NetLogo is a coordinate system of patches. Each place in the world is a patch. You can control the size of the world.
ask patches [stuff]
stuff settings for patches:
set pcolor <color>
if <condition> [<..>]
if not any? patches with [<condition>] [stop]
ask turtles [ <stuff> ]
set <variable> <value>
set label <value>
To provide state to an agent:
turtles-own [variable]
Add a slider in the UI to let the user set a variable, which can then be used by name in the code.
Add a plot in the UI to watch a variable: sum [food-eaten] of turtles
Netlogo code:
Agents in NetLogo are called "turtles" for historical (Logo) reasons.
You can add buttons to the UI with the GUI.
To create a function:
to <function-name>
...
end
To create a function with a return value:
to-report <function-name> ... report <return-value> end
clear-all - clear the world
reset-ticks - ticks back to 0
create-turtles <n> - create n number of agents
tick - increase the tick count by 1
ifelse <condition> [<...>] [<...>] - if condition is true, execute the 1st bracket, otherwise execute the 2nd bracket of code
ask turtles [ <stuff> ]
stuff settings for turtles:
set shape "<shape>"
set size <n>
set color <color>
action settings for turtles:
right <n-degrees>
forward <n-steps>
random <n> - number between 0 and n - 1
Turtles (agents) start in the middle and facing in a random direction.
A button (such as go) can be made to run repeatedly by right-clicking it, selecting edit, then checking the forever checkbox.
The world can have wrapped walls or solid walls. Right click on the world, click edit, then uncheck the wrap horizontal and vertical checkboxes.
Try 3 different evaporation rates of ant pheromone: 0, 5 and 20, to see which results in the ants getting all the food the fastest.
This exercise demonstrates the exploitation vs. exploration tradeoff.
NetLogo is a modeling environment for building and studying models of complex systems.
File -> Models Library to see library of pre-existing models.
Try out Biology -> Ants.
Info tab explains the model.
Interface tab, then press setup, then press go.
Pressing go again starts and stops the model. Sliders control the simulation speed and model variables and outcomes are plotted.
Code tab shows the NetLogo code.
Getting Started
Help -> NetLogo User Manual
Help -> NetLogo Dictionary
1.5x is definitely doable for this kind of lecture content when the audio quality is good and you are familiar with the speaker. Some of the experts videos are Skype calls, and the audio is worse and the speaker is new, so I'm having to do those at 1x speed. A 1.25x option would be nice.
I hear 1.25x is fairly optimal for watching MOOC videos, so I opted into the YouTube HTML5 beta according to the provided instructions to get faster-than-1x video playback. That's nice; I didn't know YouTube had an HTML5 test going on. Flash sucks.
Turns out YouTube doesn't offer 1.25x, just 1.5x. And it turns out I only get the HTML5 version by clicking the "Watch on Youtube" link in the MOOC, rather than watching it in the MOOC. Neither is ideal but I'll see how it goes.
Glad to see the Santa Fe Institute is producing their own MOOCs on their own platform. I've long been a fan and was regularly reading their artificial life journals back in the 1990s. I'd forgotten about them until I saw the MOOC mentioned on the self-education subreddit.
There are ~4,000 students signed up for the course on complexity.
Signup was painless, easy and typical. Really just pick a password and confirm your email address and go. I started about a week late so I'll have to catch up, but it was nice that they allowed this and didn't close registration.
The first lecture walked through all the features of their platform (branded "Complexity Explorer"). I thought it'd be a waste of time but the walk-through was very helpful. The platform is very nice and full-featured for something they've done themselves. It compares favorably to the big dogs like Coursera and Udacity.
I'll have to dig into it a little and see if it is truly something they've done on their own or if it's something white labeled off-the-shelf.
Weaver in 1948:
19th century - problems of simplicity
20th century - problems of disorganized complexity (averages and statistics)
Science's next frontier is problems of organized complexity, which have a moderate number of variables with non-linear interactions leading to emergent behavior.
Here are some paraphrased contemporary definitions of complex systems from people at the Santa Fe Institute.
Krakauer: "Systems that don't have a compact description because they encode long histories from their environment."
Moore: "Questions are complex if they require a lot of resources to answer. There may be no short cut to doing a laborious step-by-step simulation in real-time."
Crutchfield: "A complex system has a sophisticated internal architecture that stores and processes information."
Rundle: "A system with interactions and non-linear elements, often with power laws and/or fractal elements."
Page: "Systems capable of producing complexity have adaptive, responsive entities that are connected in a network, interdependent with each other, and end up being diverse (even if they started out as the same). Sometimes systems with these characteristics end up in an equilibrium or a predictable pattern and so don't result in complexity."
Newman: "A system of many interacting parts where the system shows emergent behavior."
Forrest: "Complex systems have many active, interacting components with non-trivial and non-linear interactions resulting in non-predictable behavior. The components are capable of learning or otherwise modifying their behavior."
Farmer: "A complex system has a lot of interacting parts with emergent phenomena."
Bettencourt: "Complex systems tend to be made of heterogeneous parts, tend to be open-ended (evolving), and often have circular causal chains that provide positive and negative feedback loops."
West: "Complex systems involve enormous numbers of agents interacting non-linearly, producing emergent phenomena. Complex systems are evolving and adapting and can't be described with a few simple equations."
Two possible measures of amount of complexity that will be explored in more depth:
There are many others; defining complexity is hard. You know it when you see it.
Methods of complexity science:
Goals of complexity science:
gain cross-disciplinary insights - learn about cities by studying ants, learn about economies by studying brains
develop a general theory of complexity (controversial)
Core disciplines of complexity science:
Examples of emergent behavior:
Properties of complex systems:
Examples of complex systems:
Updated out of date dependencies.
Implemented all the failure cases, and tests for the failure cases, for improperly trying to uncategorize an item.
Tested that ClojureScript is set up in a modern way and working well in the app, with an updated browser REPL setup.
Remove some unneeded tests since I decided not to remove a parent categorization when categorizing a child of that parent.
Fix some config stuff to get Travis-CI builds/tests working again.
Cleanup and finish the item categorization tests.
Implemented categorize-item with tests.
Refactored some tests and code to eliminate duplication.
Discovered clojure.zip, a natural fit for tree structures. Should it replace my map/vector/map structure? It seems much nicer at the Clojure level, but what about converting to/from JSON?
A slightly different question is when I categorize as /tax/a/b/c then suppose uncategorize as /tax/a/b. What's the meaning of that? Should I say, it's not categorized as /tax/a/b so you can't do that? Or instead, do I say it's now categorized as just /tax/a?
Made a decision on these API design questions, then had a discussion with Andrew to ensure his intuitions matched mine for the API design. Decided thusly:
Item will only store the specific category, and will just emit the others where needed in a view. You will only see the specific category that was used when you retrieve the item.
You can only uncategorize a specific category that has been used on the item, not a parent.
You can't categorize a parent of a child that's already been categorized.
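A hypothetical Python sketch of these three rules; the function names and the category-list structure are my own for illustration, not Falkland CMS's actual API:

```python
def parents(path):
    # "/tax/a/b/c" -> ["/tax", "/tax/a", "/tax/a/b"]
    parts = path.strip("/").split("/")
    return ["/" + "/".join(parts[:i]) for i in range(1, len(parts))]

def categorize(item, path):
    # rule: can't categorize with a parent of an already-stored child
    if any(path in parents(existing) for existing in item["categories"]):
        raise ValueError("a child of " + path + " is already categorized")
    item["categories"].append(path)

def uncategorize(item, path):
    # rule: only an exact stored category can be removed, never an
    # implied parent
    if path not in item["categories"]:
        raise ValueError(path + " is not a stored category of this item")
    item["categories"].remove(path)

item = {"categories": []}
categorize(item, "/tax/a/b/c")
print(item["categories"])     # only the specific category is stored
print(parents("/tax/a/b/c"))  # parents are derived, as a view would emit them
```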
Set up the scaffolding for categorize-item and uncategorize-item tests.
Wrote the failure case tests for categorize-item.
Implemented the failure cases for categorize-item.
Thought through what should be in the JSON in CouchDB and REST API and in the data structure for the Clojure API for a categorization of an item. Particularly interesting are the parents of a categorization. Do they appear or are they just implied by the path?
Example:
Categorize an item as: /tax/a/b/c
Do you see:
Or just:
I certainly expect the item to show up when I ask for items of /tax/a/b, but does /tax/a/b need to be in the JSON payload?
It'll need to be emitted in the CouchDB view, so that searches for /tax/a/b will find the items categorized as /tax/a/b/c. That's for certain.
A related question is when I categorize as /tax/a/b/c then I uncategorize. Do I need to uncategorize as /tax/a, or even as /tax? Does uncategorizing as /tax/a/b/c imply it's still a /tax/a/b?
Similarly, if I have an item categorized as /tax/a/b and I then categorize it as /tax/a/b/c is it categorized as both? Or is it no longer categorized as /tax/a/b?
Developed the category-exists functionality and its associated tests.
Got all the tests developed and working for creation of categories in taxonomies with the Clojure API.
Tremendous amount of work (mostly reading, thinking, and sketching) on the data structure and API behind hierarchical categorization in CouchDB: taxonomies, categories and categorizing items. Captured notes in my notebook and a new doc, including all the actions needed and how those break down into Clojure and REST APIs.
Some good ideas in the reading about categorizing items and querying, but decided to stick with keeping the category tree all as a JSON block in the taxonomy rather than breaking it up (relational DB style).
It seems you can only optimize for tree manipulation (by keeping category details out of the item doc) at the expense of easy and fast categorized-item retrieval, or optimize for fast and easy retrieval (by putting the category details in the item doc) at the expense of easy and fast tree manipulation. It seems clear that with Falkland CMS the right optimization is for easy insertion, deletion and retrieval of items, not tree manipulation.
More work on category tree manipulation.
Nice to be back at it after some time off to focus exclusively on the day job, but frustrating day due to lack of real progress and the complexity of this problem.