O

just a test harness?

and DON'T FORGET TO TAKE GUITAR BREAKS

ETC

I saw the talk about our terrible tools hindering the chance of anything good being made. maybe I will get some nicer tools soon.

I need to make the jump to pulling pools of perl together like goops in an environment where my creations are mostly data and fractures of data architecture, only occasional hacks on the underlying atomic structure.

taxonomic inspiration: each knowledge thing we can reason about for any $_ is a field - as in a field of knowledge, the opposite end to the unified field where everything becomes the same. applying fields is increasing entropy.

the machinery is made of itself asap. we shall call this singularity X.

Post-X must be rules about what can react, creation of new fields.

Next time is about discerning shapes and levels of behaviour and trying to hallucinate visual representation for them and how that would work. need to research animated datastructure toolkits... (like they exist)

God damn, those lumps of my intellect, before they're broken down into perly contrivances, are exactly the thing to cater for. it's all just flowing datastructures. rhythm is the new thing I will be doing, since it's so important to Doing The Right Thing.

the problem is I became an artist. now I know that my ideas are expressions. it is too hard to express them with the material available. lower the barrier to entry, but more importantly shorten the marathon of invention, always trying to beat apathy.

I'd love a Command & Conquer-like map of code, instead of it being a huge text file. swampy bits could be shrunk relative to important bits.

ALGORITHMS

data flows through algorithms

an algorithm is a machine-thinking-ideas graph which can be the unit of testability. testable means the nature of what's inside can be experimented upon, to optimise it and the surrounding program

all the algorithms are just graphs, into which complexity can be injected. like passing a closure but without making code exponentially ugly.
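
a minimal sketch of what injecting a closure into an algorithm graph could look like. the node()/wire()/flow() names are made up for illustration, not a real API:

    use strict;
    use warnings;

    # a node is just a code ref plus outgoing links (hypothetical shape)
    sub node { my ($code) = @_; return { code => $code, out => [] } }
    sub wire { my ($from, $to) = @_; push @{ $from->{out} }, $to }

    # flow data through the graph along the wiring
    sub flow {
        my ($node, @data) = @_;
        my @out = $node->{code}->(@data);
        flow($_, @out) for @{ $node->{out} };
    }

    my $double = node(sub { map { $_ * 2 } @_ });
    my $summer = node(sub { my $s = 0; $s += $_ for @_; print "sum: $s\n" });
    wire($double, $summer);
    flow($double, 1, 2, 3);    # prints "sum: 12"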

so at a point of the program there could be yay many algorithms working together like so, some complicated by the need to work together just so. it's a tangle of machines and thinking and ideas layered on top of each other graphically

probably we would find that the pattern of node linkage was always very simple and could be simulated behind the link() etc. API with something faster

streaming algorithms that can make shortcuts for big datasets, and for resources with timing and other such realities

the question is how to express this in graph so it scales... nodes could store links on themselves if it made sense... all these slight tweaks in nature, "if it makes sense", are obviously coming from the world of graph self-analysis and optimisation... firstly we should break the /object code into functions which feed each other... or the getlinks() code, why not... the variables' light cone allows the shape of the algorithm around it to change. inputs and outputs, chains of them

FIELDS

We're using different Graph objects to get a field effect... a complete set of links without neighbour noise

we should be able to say this part of the graph is now a field... if an algorithm is entropied by the limit of the field, this is shown as a bulge or something somewhere. links to within from outside and vice versa should be seen: field secretiveness (relative to anything?). so is creating a field over nodes cloning them? with backlinking? perhaps you create the selection first then say clone. perhaps something can slither out of the way, of course it can...

So there's usually just one graph per App. the App would branch away its various data but it's one bunch of links. a perl program in this new world would be more like a daemon serving the illusion of several Apps and whatever they need, down to the existing tech

but a flotilla of algorithms and junk

each node in the subset could be linked to the original in a secret way (if a query asked, it could traverse back into the original graph). once we have subsets carved out we can run them into algorithms more easily
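
a sketch of that carve-then-backlink idea: clone a subset of nodes, each keeping a secret link back to its original so a query can traverse back. the node shape and the __origin key are guesswork:

    use strict;
    use warnings;

    # carve a field: clone a subset of nodes, each with a secret
    # backlink to its original (hypothetical structure)
    sub carve_field {
        my (@originals) = @_;
        return map {
            my %clone = %$_;           # shallow copy of the node
            $clone{__origin} = $_;     # the secret way back
            \%clone;
        } @originals;
    }

    # a query that asks can traverse back into the original graph
    sub origin_of { my ($node) = @_; return $node->{__origin} }

    my @graph = ({ name => 'a' }, { name => 'b' }, { name => 'c' });
    my @field = carve_field(@graph[0, 1]);
    print origin_of($field[0]) == $graph[0] ? "backlinked\n" : "lost\n";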

there's kind of a turf concept going on... various customs for travellers...

now we're using IDs to find things in closed circuits of links...

HMM

so like protein coders that want to survive, the base apparition creates a luxurious overworld

"if it lives, it lives" is the base statistical point to life, humanity and its complexity is just increasing luxury erupting on top

I believe I am sowing mental discord sometimes but I am working it okay

entropy can't be explained, the teacher will go off on tangent after tangent, while the student gets more and more full of unresolving notionettes.

THIRD

linkery can be on each object or in the field's link store. revelation 1 is links-on-objects. r1 is to get a graph of the codey rules leading to r2. r1 holds objects to represent itself in an extensible manner

for any function there's a physical medium/machinery for data flow and action. on top of that, something says "this is this kind of idea and why", and the idea itself is applied to the machinery it will affect

machinery (for data flow)
thinking (on the machinery)
ideas (what stuff is for)
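
purely illustrative, one way the three layers could sit on a single function (the hash shape is invented, not how it actually is):

    use strict;
    use warnings;

    # three layers stacked on one function
    my $sort_mach = {
        machinery => sub { return sort { $a <=> $b } @_ },   # data flow
        thinking  => 'numeric comparison suits this input',  # on the machinery
        idea      => 'put things in order',                  # what it is for
    };

    # the idea is what other machinery would match against;
    # the machinery is what actually runs
    my @sorted = $sort_mach->{machinery}->(3, 1, 2);
    print "idea: $sort_mach->{idea} -> @sorted\n";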

so ideas need to cosy up to machinery real nicely. thinking could get complicated adapting things (had gone on a tangent here). so through their shared ideas (and/or via thinking) different machinery can work together. there might be an ambient idea realiser (optimisation ideas...) in r2 pattern matching... like how ideas look for reality to improve on

concurrent graph pattern matching needed... pattern matching means a chunk of graph exists, for example execution is at this point about this kind of machinery/data/etc...

argumentative language is machinery, which invokes ideas through fractured thinking.

so we needa take data to make machines to make thinking and ideas and more graph computer complexity.

machine says take this data, munge it like so, put it here.

there's a machine around the machine, the execution environ, where data is looked for and dumped out. in this environ there's arms and eyeballs protruding in on the swarming machines.

hmm, like ideas get more beautifully meaningful-per-word, the rig of r2 dangles increasingly greasy code down to perl

trying to see the shape of things where code becomes machine! ah long lost reality.

At Field Hutt Josh says "massive chunks" and my being ignites, salivating... We're dogs with jobs.

on search: the result graph is result nodes containing the matched thing... IT IS DETERMINED somewhere before execution of the receiving function whether that thing wants the result node or the thing itself. this situation can be SEEN, because it's all graph all the time. the resolve could be thinking, machine-generated in test cases.

test cases should be a major part of generating the program as well as proving it. say "take [a,b,c]s and {etc}". code has sanity checks mixed in with function, like /(.+) etc/ || die
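
the /(.+) etc/ || die idiom, spelled out as a runnable fragment (the function and its pattern are just examples):

    use strict;
    use warnings;

    # a function with its sanity check mixed right in
    sub parse_line {
        my ($line) = @_;
        my ($word) = $line =~ /^(\w+) etc/
            or die "insane input: '$line'";
        return $word;
    }

    print parse_line("take etc"), "\n";   # ok: prints "take"
    # parse_line("nonsense");             # would die with a message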

so what does r2 want to do? pattern matches to notice where/when/what we are and mess with things accordingly. each of these messings creates a mess, in theory, but it's better to have messy theory and clean practice since you can't see theory. mostly we shall just load up the graph technology itself with little complications. that graph generates code that we hop over to.

so r1 allows the graph tech to be loaded up with complications and then that generates r2 graph tech code. r2 graph tech supports all the sweet functionality. r2 graph computer? what decides what gets executed? well for development, tests and web client requests will do...

the r2 graph computer is just another machine that gets executed. I wonder if it implements the thinking-idea relative strangeness, or whether that stuff could be r1 complications. it could perhaps not be required for some initial r2 functionality? perhaps it is a mistake to avoid putting things in r1? what shape is r1 really? if there's anything to do it can get done first hand. the point of separating machine-thinking-idea is to clarify the engineering and also open possibilities of computer understanding. the graph computer is an ambient dude that waits for the world to get to it? aha no, yeah: its state wants to be seen by things wanting to do things at certain points of computation

anyway we will say we want to run this machine here -> "lastfm submit scrobbler.log" and it executes: its machine pulls in other machines through ideas etc etc. ah, so a machine graph can specify specific other machines too somehow, if it can locate them somehow... you could also say NO, not that machine, this machine, for it is an alike idea, but adapt the input/output data flow with this machine, etc.
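
a sketch of locating a machine by the idea it claims and handing it the rest of the spec as input. the registry and matching are invented for illustration:

    use strict;
    use warnings;

    # machines keyed by the idea they answer to (made-up registry)
    my %machines = (
        'lastfm submit' => sub { my ($file) = @_; print "submitting $file\n" },
    );

    sub run_machine {
        my ($spec) = @_;
        # longest idea first, so more specific machines win
        for my $idea (sort { length $b <=> length $a } keys %machines) {
            return $machines{$idea}->($1) if $spec =~ /^\Q$idea\E\s+(.*)/;
        }
        die "no machine for '$spec'";
    }

    run_machine("lastfm submit scrobbler.log");  # prints "submitting scrobbler.log"

saying "NO, not that machine, this machine" would then just be swapping the entry in the registry before dispatch.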

can I turn my typing-letters-in-order skills into a profitable business? no, probably just a bizarre bunch of material (art?)

the interface is for capturing material to compute and impressing the user.

SO

a lexical scope, adapted into a graph function, is a sorta field. inherited variables are adapted with thinking...

it's feeling more musical further along this cut.

keeping time with a six minute track on loop.

making testery for graph, represented as ui drawing instructions. that stuff will eventually be runnable/always run and visible in the ui. twould be good to have ui in one process and fork for tests. run another perl process altogether and get output? get output how? database time? nah, fuck that. just ui-runnable test routines would be the ticket. then those routines could elaborate into machine clusterations. the tests are the routines, right.
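
the fork-for-tests shape could be as small as this sketch, assuming a unixy perl; fork_test() is a made-up name:

    use strict;
    use warnings;

    # run a test routine in a forked child so the ui process stays up
    sub fork_test {
        my ($name, $test) = @_;
        my $pid = fork() // die "fork failed: $!";
        if ($pid == 0) {          # child: run the test, exit with status
            exit($test->() ? 0 : 1);
        }
        waitpid($pid, 0);         # parent: collect the result
        printf "%s: %s\n", $name, $? == 0 ? "ok" : "not ok";
    }

    fork_test('graph limbering', sub { 1 + 1 == 2 });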

- limber up graph happening. thawed graph bootstraps itself with links from nowhere to all its nodes. usually there'd be links from at least somewhere philosophical to everything talking about it relative to various formulas

there's definitely that machine-thinking-ideas pattern awaiting elegant apparition.

TODO

need to get whole svg state from test script back to the scope, so it can be seen. should probably rejig the ui a bit now it's more stable, to support:
- breadcrumb history
- yadda
- dump depth limiting
- exceptions to said
- those exceptions a kind of view port
- view ports
- view ports by graph pattern like G(webbery)/filesystem
- tests run from ui
- butter hacks managed in butter ui itself
- to deploy a new butter ui the new process requests the old process to die, leaving it necessary state to keep on truckin

seeing things happening: lets make the get_object stuff make drawing cats: drawings, animations, removals; gen, diff, fillin svg; collecting them things and clearing, statusing... should be able to see these things happening in graph... see the execution of get_object at all points, resume in there and interrogate data

the scope is for navigating through dimensions and aspects of program being

code becomes code graph
- artistic code structure abstractions like shrinking vast uglies
- diagram/maps of code
- invent: click link in browser, gvim goes to the line of code

gvim still hacks up text file, changes migrated into code graph
- needs to be visual if it's tricky or if user wants to
- this could stretch right down to git
- the whole user's interaction is an evolving tree of actions

avoid changing line numbers

insert break/dump points, run code as another entity

need to have each sub log somewhere what it does

then we have this huge shape of the program's execution - look for patterns

these patterns involve some data nature and some program nature

we can look for places where svg gets linked to anything. one butter creates another butter for increasing experiment, like an intellectual caterpillar slipping into the future

so we need to automatically hack hooks into butter and execute it

the hooks note what's happening back to the past. hopefully the program can be re-executed and happen exactly the same, so we can go up to a point and hit pause. simple

entropy field == light cone for data? with total functional entropy figured by watching execution of test cases. hmm, indeed

queries are middle managers. if the top boss works infinitely fast as computers do he can deal with everything.

an invention: render shapes at random, endeavouring genetically, using a human to point out bits that look like this or that. build with that a visual vocabulary. use that to generate a noisy field that could be interpreted by four square orientations into four sequences of a comic. all this kind of formulaicism. if you see the images in a different order, is your brain effectively prepared differently?

looking at the screen vs not. may be something to experiment with.

summarise and this little thing growing in note() are a case of data massage, from one set into a string various ways, pulling out extra bits of info or recursing deeper in, always with some limitation. seems like the kind of thing to take care of easily somehow

it's all about massaging data

click click

make a tool for capturing the path from one point in the graph to another

make a tool for drawing links

get_object turns into a dispatcher

make them dispatch tables/graphs for get_object so we can start hacking on it fast
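
what get_object-as-dispatcher might look like with a plain dispatch table; the keys and handlers here are placeholders, not the real ones:

    use strict;
    use warnings;

    # a table we can hack on fast
    my %get_object = (
        node => sub { my ($id) = @_; return { id => $id, type => 'node' } },
        svg  => sub { my ($id) = @_; return "<svg id=\"$id\"/>" },
    );

    sub get_object {
        my ($type, @args) = @_;
        my $handler = $get_object{$type}
            or die "get_object: no handler for '$type'";
        return $handler->(@args);
    }

    print get_object(svg => 'exam'), "\n";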

get_object will get the object of stylehouse into being

life works through its means

structures for structures. perhaps it would be more playful and fun and maybe even faster to build something for sheer graph gaming, then distill a stylehouse from what's possible to imitate in there. probably just an insightful exercise...

we are all stylehouses. get to the point of freedom of expression where you're just applying styles. it's not the end, it's just the stylehouse. what then.

FOURTH

we want to find patterns in the notation: everything under a certain call becomes a map. apply the map to another certain call; if things are relatively the same then alright! that would take a lot of coding but not much user clicking-around time

code needs tons of intellect put into it, or it can be more open to the user's intellect

COMPLICATIONS

do_stuff() takes a $P-rogram graph limb, calls the machs hooked in there

they can mess with the nature of what calls them. messing with the caller is hacked in right now, cause what calls them isn't graphy algorithm yet
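
a guess at the hook shape: complications stored on a $P limb get a chance to mess with the caller's arguments before the work happens. the limb layout and mach storage are invented:

    use strict;
    use warnings;

    # a $P graph limb with machs hooked in (shape is guesswork)
    my $P = { get_object => { machs => [] } };

    # a complication that messes with its caller's arguments
    push @{ $P->{get_object}{machs} }, sub {
        my ($args) = @_;
        push @$args, 'extra-context';
    };

    sub do_stuff {
        my ($limb, @args) = @_;
        $_->(\@args) for @{ $P->{$limb}{machs} };  # let hooks mess with us
        print "doing $limb with: @args\n";
    }

    do_stuff('get_object', 'node42');   # prints "doing get_object with: node42 extra-context"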

eventually the complication would be an algorithm complete with how to mess with the caller, etc etc. and of course sanity/test cases...

so eventually there'd be a whole lot of things complicating the Graph/Node infrastructure until its nature is juicy enough to do everything we want to do with it

tempting to say it would rewrite itself in perl with less hooking... its actions would have to be able to be in $E if bug chasing got down to it. tracking calls beyond eg getlinks() is a waste of time, except for sometimes maybe

it's a computer mind that can be forked, tested and suddenly begin real work on real data

a lot of short-term complications will be for doing stuff through time, like later_id_remover(). this is where making things algorithms can make things more semantically simple, cause then they can be stretched out through time with obviousness

REAL DATA

some user input etc. should be saved into yaml datasheets.
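
a minimal sketch, assuming the YAML::XS module is around; the filename and data are made up:

    use strict;
    use warnings;
    use YAML::XS qw(DumpFile LoadFile);

    # save user input into a yaml datasheet and read it back
    my $datasheet = { user => 'josh', knobs => { volume => 11 } };
    DumpFile('input.yaml', $datasheet);

    my $again = LoadFile('input.yaml');
    print "volume: $again->{knobs}{volume}\n";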

ANYWAY

the $P graph is a table of contents for the program

hacks are expressed as machs attached to the $P graph

the $E graph is execution state/history: a call stack down to a point, pointers to dead graph, etc. a way to pause, fork, change, resume continuously, as a sexy debugging process

I suppose within the $E flow will be various entropy-related forces of execution

also the $U graph, stuff the user is doing?

lets make the code for complications faster, caching them into perl data from graph masters. then get on with USING complexity to enhance enhance etc
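
a sketch of caching graph-master output into perl data, regenerating only when the cache key changes; complication()/expensive_generate() and the key scheme are invented:

    use strict;
    use warnings;

    my %cache;

    # cache the perl data generated from a graph master
    sub complication {
        my ($master) = @_;
        my $key = "$master->{name}:$master->{version}";
        return $cache{$key} //= expensive_generate($master);
    }

    sub expensive_generate {
        my ($master) = @_;
        print "generating from $master->{name}...\n";
        return { code => sub { "hi from $master->{name}" } };
    }

    my $m = { name => 'unlink-animator', version => 3 };
    complication($m) for 1 .. 3;   # "generating" prints only once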

trippyboxen needs config saving abilities which involves finding paths into the code to various numbers

need a way to graph code such that the user can click on things to give meaning. so code becomes lines, lines become bits, alterations attach to bits, meaning attaches also to bits, meanings create thinking to ideas, and we have an algorithm make another algorithm that connects them. algorithms need these meaning mappings to break upon refactoring. meaning can then be mapped into a meaningful UI, for eg synth playing

make get_object use transparent not-graph stuff for its algorithming... encapsulate the algorithm as a light cone, entropy field, whatever. the code shines lights through the API, through time; it's about connecting those beams

from the foray of graphing codes... codes() gets data, short_codegraph() chews it up a bit for #thecodegraph, #codegraph_ancode digs out the actual code back from G(codes), probably. lets... user wires meaning into the codes. user fucks with frankenbutter codes. frankenbutter behaviour observed. behaviour mapped to fuckery, like saving synth operations. programs synthesise data

also lets... define some datatypes, eg a node, perl code, a list or a search() result, for svgering, so while experimenting we can just chuck things out to the scope. G(scope): "toolbar" -> elements, "exam" -> elements, where the elements are scope data objects (connected with layout sensibilities). we complicate unlinks in this graph to generate element removes. things that want animation can complicate for it. has begun, but need to delete "exam" when it's no longer the object of attention...

expand upon #notation so we can use it to watch frankenbutter tick. it's the beginning of the action/attention machine: to understand its own functioning, understand meaning, to be able to present it to the developer so they can understand. it's a symbiosis of three: franken does stuff, butter interprets notation, developer sees patterns and applies the human mind

create some notation maps: known low-level trees to fold up and some big-deals to draw bigger, eg Web->run, get_object requests. see frankenbutter execution. add resume/breakpoint control via notation(). run webserver while broken? interrogate franken being via port 3001 from butter

console.log when a remove or animate doesn't find its target

make field membership a hash in the node for fast association (and use it so)
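
the membership hash could be as simple as this sketch (node shape and helper names invented):

    use strict;
    use warnings;

    # field membership kept as a hash on the node for O(1) association
    my $node = { name => 'n1', fields => {} };

    sub join_field { my ($n, $f) = @_; $n->{fields}{$f} = 1 }
    sub in_field   { my ($n, $f) = @_; exists $n->{fields}{$f} }

    join_field($node, 'webbery');
    print in_field($node, 'webbery') ? "member\n" : "not\n";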

the linguotic web of program, the attention that shines on patches of structure, the actions that build graph

so features complicate "public" program bits until common ground is laterally established

WORDS