
### Re: Coding: Fleeting Thoughts

Posted: Tue Dec 05, 2017 2:01 am UTC
Xeio wrote:Today's Google doodle is nice.

They appear to measure “shortest solution” in terms of fewest instructions, not least movement.

### Re: Coding: Fleeting Thoughts

Posted: Tue Dec 05, 2017 4:08 am UTC
Qaanol wrote:
Xeio wrote:Today's Google doodle is nice.

They appear to measure “shortest solution” in terms of fewest instructions, not least movement.

also, the "shortest solution" is wrong for the last one.

edit #1: same for number 4.

edit #2: and number 5.

Spoiler:
number 4, 6 instructions instead of 7: { { forward, forward, right } loop 4 times, left } loop 3 times

number 5, 5 instructions instead of 6: { { forward, right } loop 10 times, left } loop 18 times

number 6, 5 instructions instead of 6: { forward, right, forward, forward } loop 11 times

### Re: Coding: Fleeting Thoughts

Posted: Tue Dec 05, 2017 10:16 am UTC
hotaru wrote:
Qaanol wrote:
Xeio wrote:Today's Google doodle is nice.

Spoiler:
number 6, 5 instructions instead of 6: { forward, right, forward, forward } loop 11 times

there's a solution in 4. (hint: you're doing something 3 times)

### Re: Coding: Fleeting Thoughts

Posted: Mon Dec 11, 2017 9:06 pm UTC
Xanthir wrote:You might be interested in looking into Streams, like RxJS and the like. The concepts are very similar, and they've thought about many of these problems already.

I've poked at them, but the documentation tends to focus on *using* them, not how they were written and how they work.

A lot of what they are doing -- passing key-value property bundles around -- isn't what I'm aiming for. I want type safety and argument descriptions and the like.

Also, they seem OO; and I've been trying to be functional instead of OO. My pipe/source/sink elements are fundamentally *functions* (with a bit of type-dressing), RxJS seems to have *objects* as its fundamental unit. I can see how to solve it with objects; maybe I should just bite the bullet, create real objects with a richer API, and just have factory-from-function when users don't care?

### Re: Coding: Fleeting Thoughts

Posted: Tue Dec 12, 2017 12:39 pm UTC
Yakk wrote:I've poked at them, but the documentation tends to focus on *using* them, not how they were written and how they work.

Well, they are open source...

I'm not entirely sure I read your notation right, but you seem to be mixing the two fundamental control principles of streams: push and pull.

In push-based systems, the Sources will generate new data whenever they feel like it, and call into their sources with the new data. This is useful in complex event processing, e.g. when data arrives in near real-time from sensors.

In pull-based systems, the call stack is the other way around. You call into your sources, requesting a piece of data, which in turn request data from their sources etc. This is commonly called an operator graph in database systems. Because your sink will only request as much data as it needs, this allows early aborting of queries suffixed with 'LIMIT n'. It's also used anywhere you get an Iterator.
Pull-based systems require all data to be available when requested, e.g. in memory or on a local disk. You can kinda get around that in a language with async/await or something, but that'll only turn your source<T> into source<Promise<T>>, and then you're usually better off with a push-based system.

In push-based systems, no pipe or sink needs to know about its sources. In a pull-based system, no pipe or source needs to know about its sinks. Your definitions of source and sink seem to imply a push-based system, but then your pipes get a reference to both a source and a sink, and that seems weird.
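To make the contrast concrete, here's a minimal sketch of the two control flows (Python, with names of my own choosing, not any particular library's API):

```python
# Pull: the consumer drives; a source is just an iterator you ask for data.
def pull_source(data):
    yield from data            # produces an item only when the sink asks

# Push: the producer drives; a sink is a callback the source calls into.
def push_source(data, sink):
    for item in data:
        sink(item)             # the source decides when data flows

# Pull-based consumption: the sink (sum) requests items one at a time.
total = sum(pull_source([1, 2, 3]))

# Push-based consumption: the source delivers items into the sink callback.
received = []
push_source([1, 2, 3], received.append)
```

In the pull version the call stack points from sink to source; in the push version it points the other way, which is exactly the "who knows about whom" difference described above.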

You can try to create a system that somehow does both (rxjs appears to do so), but (from a cursory glance) rxjs just implements two systems in the same library with similar APIs. Pick one and start with it.

You can do everything functional, but IMHO it's easier to model as a (acyclic, connected, directed) graph of nodes, each being an object with a standardized API, like push<T> or observe<callback<T>> (for push-based) or get<T> or getIterator<T> (for pull-based). Passing the connections in a type-safe way in the constructor just seems cleaner and more robust than mixing connections and data in an arbitrary argument list.

### Re: Coding: Fleeting Thoughts

Posted: Wed Dec 13, 2017 4:17 pm UTC
Tub wrote:
Yakk wrote:I've poked at them, but the documentation tends to focus on *using* them, not how they were written and how they work.

Well, they are open source...

And so is Linux. That doesn't mean it's reasonable to learn the Linux kernel's overall architecture by reading the source.
I'm not entirely sure I read your notation right, but you seem to be mixing the two fundamental control principles of streams: push and pull.

So, I'm using continuation passing style.

A function that returns X instead takes a function that is passed X.

This permits a few things. First, a function can return *more than once*, or zero times. Second, you can return *view types* into stack data.

The first seems useful for streaming operations; sometimes one unit of data becomes 3 or 0. The second gets rid of an inefficiency in returning data of statically dynamic size (static in implementation; dynamic in interface).

In push-based systems, the Sources will generate new data whenever they feel like it, and call into their sources with the new data. This is useful in complex event processing, e.g. when data arrives in near real-time from sensors.

I think you mean "call into their sinks".
In pull-based systems, the call stack is the other way around. You call into your sources, requesting a piece of data, which in turn request data from their sources etc. This is commonly called an operator graph in database systems. Because your sink will only request as much data as it needs, this allows early aborting of queries suffixed with 'LIMIT n'. It's also used anywhere you get an Iterator.
Pull-based systems require all data to be available when requested, e.g. in memory or on a local disk. You can kinda get around that in a language with async/await or something, but that'll only turn your source<T> into source<Promise<T>>, and then you're usually better off with a push-based system.

Mine is a pull-based system with continuation passing style.
In push-based systems, no pipe or sink needs to know about its sources. In a pull-based system, no pipe or source needs to know about its sinks. Your definitions of source and sink seem to imply a push-based system, but then your pipes get a reference to both a source and a sink, and that seems weird.

The sink is only known about insofar as I *return through it*.

Instead of T(U), we get void(U, continuation(T)). The transformation of (U->T) to (U->(T->void)->void) is a brainless one. Except we get (U->T*) to (U->(T->void)->void) out of it for free.
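A minimal sketch of that transform (Python standing in for the C++; the names are illustrative): lifting a plain U->T function into CPS is mechanical, and a hand-written CPS function is then free to call its continuation zero or many times, which is the U->T* part you get for free.

```python
def cps(f):
    """Mechanically lift U -> T into U -> (T -> None) -> None."""
    def lifted(u, k):
        k(f(u))                # exactly one "return", through the continuation
    return lifted

# A hand-written CPS function may "return" many times (U -> T*):
def halves(n, k):
    while n > 0:
        k(n)                   # one input value yields several outputs
        n //= 2

out = []
cps(lambda x: x + 1)(41, out.append)   # single return: appends 42
halves(8, out.append)                  # multiple returns: appends 8, 4, 2, 1
```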
You can do everything functional, but IMHO it's easier to model as a (acyclic, connected, directed) graph of nodes, each being an object with a standardized API, like push<T> or observe<callback<T>> (for push-based) or get<T> or getIterator<T> (for pull-based). Passing the connections in a type-safe way in the constructor just seems cleaner and more robust than mixing connections and data in an arbitrary argument list.

There isn't any data in my system. Pipes don't *take data*; they take connections.

Source, Sink and Pipe are 3 kinds of graphs. They can be composed in certain ways. If you connect a source to a pipe, you get a source of the pipe's output. If you connect a pipe to a sink, you get a sink of the pipe's input. If you connect a pipe to a pipe, you get a pipe.

If you connect a source to a sink, you get an executable object. Running it executes the graph.

You can work with any of the 3 types of graphs. You can consume data from a source by passing in a continuation. You can provide data to a sink. You can do both (at once) to a pipe to process data.

The fact that nodes are also sinks/sources or pipes simply reflects that a node is a graph.

The syntax ends up being pretty clean, and as I don't type erase unless requested everything can be inlined.
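A rough sketch of those composition rules, with sources, pipes and sinks modeled as plain CPS-style functions (all names here are my own, not the actual API):

```python
# A source takes a continuation and feeds it values.
def source(k):
    for x in (1, 2, 3, 4):
        k(x)

# A pipe takes a value and a continuation; one input may yield 0+ outputs.
def evens_doubled(x, k):
    if x % 2 == 0:
        k(2 * x)

def compose_source_pipe(src, pipe):
    """source + pipe -> a source of the pipe's output."""
    return lambda k: src(lambda x: pipe(x, k))

def compose_pipe_pipe(p1, p2):
    """pipe + pipe -> pipe."""
    return lambda x, k: p1(x, lambda y: p2(y, k))

def connect(src, sink):
    """source + sink -> an executable object; calling it runs the graph."""
    return lambda: src(sink)

results = []
run = connect(compose_source_pipe(source, evens_doubled), results.append)
run()    # drives 1..4 through the pipe; only 2 and 4 survive, doubled
```

Everything stays a function until `connect` produces the runnable graph, so a compiler can inline the whole chain.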

What I'm missing is a clean way to handle multiple inputs/outputs. Reading the reactive streams has given me a few thoughts, but their system seems really tied to timers and async and property bags and all that mess.

### Re: Coding: Fleeting Thoughts

Posted: Wed Dec 13, 2017 11:12 pm UTC
Yakk wrote:
Tub wrote:Well, they are open source...

And so is Linux. That doesn't mean it's reasonable to learn the Linux kernel's overall architecture by reading the source.

Eh, if you wish to save time, just skip the comments.

Yakk wrote:Source, Sink and Pipe are 3 kinds of graphs. They can be composed in certain ways. If you connect a source to a pipe, you get a source of the pipe's output. If you connect a pipe to a sink, you get a sink of the pipe's input. If you connect a pipe to a pipe, you get a pipe.

Yeah, but that's exactly the part where I think you're overcomplicating things. You wish for the ability to extend the graph on all ends, and that's going to end up more complicated than a graph that you can recursively create bottom-up, or maybe iteratively create bottom-up in a long chained method call. A single data type for nodes ("source" when pull-based) should suffice, and it makes design and reasoning a lot simpler.

I've worked with quite a few operator graphs, iterator implementations, workflow systems and other "stream" implementations, and in exactly none of them did anyone see a need to create three different data types, or to extend graphs on multiple sides. I'm sure it can be done (though I'm too rusty to do it in a functional language), but I wonder why one should.

Yakk wrote:What I'm missing is a clean way to handle multiple inputs/outputs.

Yeah, that seems easier in an OO language where you just slap an array of inputs/outputs on each object. It's also easy in a functional language if you restrict yourself to constructing the graph from one side. What you're trying to do in a functional language... I see where your troubles come from.

### Re: Coding: Fleeting Thoughts

Posted: Thu Dec 14, 2017 4:40 am UTC
The heck... how the hell does an execution plan for an Index Scan read 2+ million rows from an index for a table with only 10 thousand rows?

And then when I drop and recreate the index the problem magically goes away...

### Re: Coding: Fleeting Thoughts

Posted: Thu Dec 14, 2017 5:26 am UTC
Inaccurate statistics. Dropping and recreating the index rebuilds the statistics too. Why so many rows? It depends. If it's a loop, it could be reading the same rows multiple times and then filtering them out. Sometimes I've seen SQL Server do a cartesian join and then filter, when it could have done a merge join, because fuck you that's why.

SQL Server can be quirky with its execution plans. For example, when it joins, it always assumes some rows won't join; we had a huge staging database with several million rows per day per table, each record having a 1:1 correspondence, with hundreds of tables. After 5 or 6 joins, it would assume only 1 row would come back and optimize the execution plan accordingly: index seek, nested loops. The only way I could fix it was join hints.

### Re: Coding: Fleeting Thoughts

Posted: Thu Dec 14, 2017 5:53 am UTC
No loop, just a "where in" clause which apparently caused the actual read rows to grow exponentially as the number of items in the list grew.

Granted, it was still slow-ish after the index rebuild, so I tweaked the way I was requesting the data from Entity Framework. Now it just does some voodoo magic with an IQueryable rather than a list, which turned the "where in" clause into a straight inner join, which is neat.

So I learned a little about leaving something as IQueryable that I don't necessarily need.

### Re: Coding: Fleeting Thoughts

Posted: Thu Dec 14, 2017 6:21 am UTC
I meant a nested loop join in the execution plan; they can result in the table being read multiple times in some situations. For example, the outer table has 10,000 rows, the subquery returns 200 rows, and for each record in the outer table it reads all 200 rows from the inner table (the subquery result), joins them, and then filters the result.

### Re: Coding: Fleeting Thoughts

Posted: Thu Dec 14, 2017 4:28 pm UTC
Xeio wrote:No loop, just a "where in" clause which apparently caused the actual read rows to grow exponentially as the number of items in the list grew.

WHERE IN (1, 2, 3, 4) or WHERE IN (SELECT ...) ?

The first shouldn't be much of a problem for short lists, but you'd need to figure out if longer lists get properly indexed so they can be joined. If the optimizer translates that into (WHERE a == 1 OR a == 2 OR a == 3 OR a == 4), that'd be bad. Temporary tables (or, apparently, IQueryable) can help.

The latter is a dependent subquery, and using those means taunting the optimizer. Make sure the query optimizer actually converts the dependent subquery into a proper join; better yet, rewrite the query as an actual join. Dependent subqueries aren't always optimized well; if the subquery is executed once per row, then you'll see what you're seeing.

### Re: Coding: Fleeting Thoughts

Posted: Thu Dec 14, 2017 6:32 pm UTC
Tub wrote:WHERE IN (1, 2, 3, 4) or WHERE IN (SELECT ...) ?

The first shouldn't be much of a problem for short lists, but you'd need to figure out if longer lists get properly indexed so they can be joined. If the optimizer translates that into (WHERE a == 1 OR a == 2 OR a == 3 OR a == 4), that'd be bad. Temporary tables (or, apparently, IQueryable) can help.

The latter is a dependent subquery, and using those means taunting the optimizer. Make sure the query optimizer actually converts the dependent subquery into a proper join; better yet, rewrite the query as an actual join. Dependent subqueries aren't always optimized well; if the subquery is executed once per row, then you'll see what you're seeing.

For comparison you can look at the queries (except for the 3rd "select *" line, this is all SQL generated by Entity Framework):

Slower Where In clause:

Code: Select all

```sql
SELECT [p].[Id], [p].[ApiKeyId], [p].[ExpiresIn], [p].[ItemID], [p].[Marks], [p].[Time],
       [p.Item].[ID], [p.Item].[IsExtraordinary], [p.Item].[ItemCategory], [p.Item].[Name], [p.Item].[Rarity]
FROM (
    select * from Prices where Id in (select max(Id) from Prices group by ItemId)
) AS [p]
INNER JOIN [Items] AS [p.Item] ON [p].[ItemID] = [p.Item].[ID]
WHERE [p].[ItemID] IN (1871, 1898, 1908, 2063, 1909, 1918, 1915, 2056, 1893, 1913, 1892, 1873, 1904, 2066, 2062, 1919, 1911, 2058, 1891, 1906, 1914, 1899, 1901, 1905, 1916, 1896, 2060, 1894, 2057, 1870, 1920, 2064, 1903, 1900, 1872, 1895, 1912, 1907, 2065, 2059, 1897, 1917, 1973)
ORDER BY [p.Item].[Rarity] DESC, [p.Item].[Name]
```

Code: Select all

```sql
SELECT [p].[Id], [p].[ApiKeyId], [p].[ExpiresIn], [p].[ItemID], [p].[Marks], [p].[Time],
       [p.Item].[ID], [p.Item].[IsExtraordinary], [p.Item].[ItemCategory], [p.Item].[Name], [p.Item].[Rarity]
FROM (
    select * from Prices where Id in (select max(Id) from Prices group by ItemId)
) AS [p]
INNER JOIN [Items] AS [p.Item] ON [p].[ItemID] = [p.Item].[ID]
WHERE [p].[ItemID] IN (
    SELECT TOP(40) [i].[ID]
    FROM [Items] AS [i]
    WHERE (CHARINDEX(@__name_1, [i].[Name]) > 0) OR (@__name_1 = N'')
    ORDER BY [i].[Name]
)
ORDER BY [p.Item].[Rarity] DESC, [p.Item].[Name]
```

### Re: Coding: Fleeting Thoughts

Posted: Thu Dec 14, 2017 9:33 pm UTC
So thinking about those graphs as functions has helped.

There are a bunch of ways to join single-argument functions:

Code: Select all

```
U->T* + U->V* = U->(T or V)*
U->T  + U->V  = U->(T and V)
U->T* + U->V* = U->(T* and V*)
U->T* + U->V* = U->((T and V)* and (T* or U*))
A->B  + U->T  = (A or U)->(B or T)
A->B  + U->T  = (A and U)->(B and T)
```

which correspond to certain graph transformations.

I'll note that my decision to have a U->T* primitive instead of a U->T primitive prevents some of the simpler transformations above (tupling the output) from working.

There are probably other simple ways to "add" functions together to run in parallel. There is also the idea of splitting a function into two pieces (a function that returns both A and B).

Continuation Passing Style only "really" supports arguments with a * at the end.

U->T* + U->V* = U->(T or V)* is easy,
A->B* + U->T* = (A or U)->(B or T)* is also easy
A->B* + U->T* = (A and U)->(B or T)*

So I think I can conclude that by restricting myself to T* return values (0 or more copies of the return value), I cannot easily manage the "regrouping" into tuples without doing allocation while passing.

Unless I can think of a better way to express (X* and Y*) other than (X or Y)*.

Maybe I should just embrace (X or Y)* -- the idea that streams get individual changes. Mayhap what I need is "word" barriers or something, and other than that there is no atomic changing of more than one thing.

So unpacking a tuple is (As and ...)->(As or ...)*

### Re: Coding: Fleeting Thoughts

Posted: Thu Dec 14, 2017 9:42 pm UTC

If that is causing problems, the Prices subquery is a good candidate for an indexed view.

But this made me throw up in my mouth:

Code: Select all

```sql
SELECT TOP(40) [i].[ID]
FROM [Items] AS [i]
WHERE (CHARINDEX(@__name_1, [i].[Name]) > 0) OR (@__name_1 = N'')
ORDER BY [i].[Name]
```

### Re: Coding: Fleeting Thoughts

Posted: Thu Dec 14, 2017 9:57 pm UTC
Thesh wrote:If that is causing problems, the Prices subquery is a good candidate for an indexed view.
Initially I had assumed that would have to be the slowest part of the query but I ran into some other performance problems (including the above) that dwarfed any issue the subquery may have caused.

I've still been thinking about making the "most recent price for each item" a table of its own and just using a trigger to keep it up-to-date, but I'm reasonably happy with performance at the moment.

It no longer takes >1s to render (@Html.DisplayFor IS SLOOOOW) or >6s to do a search (before I rebuilt the index and changed the query).

Thesh wrote:But this made me throw up in my mouth:

Code: Select all

```sql
SELECT TOP(40) [i].[ID]
FROM [Items] AS [i]
WHERE (CHARINDEX(@__name_1, [i].[Name]) > 0) OR (@__name_1 = N'')
ORDER BY [i].[Name]
```

Yeah, string contains is pretty ugly in particular. The LINQ isn't so bad though.

Code: Select all

```csharp
var items = _marketContext.Items
    .Where(i => i.Name.Contains(name))
    .OrderBy(i => i.Name)
    .Take(60)
    .Select(i => i.ID);

var prices = await _marketContext.Prices
    .FromSql("select * from Prices where Id in (select max(Id) from Prices group by ItemId)")
    .Include(p => p.Item)
    .OrderByDescending(p => p.Item.Rarity)
    .ThenBy(p => p.Item.Name)
    .Where(p => items.Contains(p.ItemID))
    .AsNoTracking()
    .ToListAsync();
```

### Re: Coding: Fleeting Thoughts

Posted: Fri Feb 02, 2018 12:50 am UTC
Noob question:
I want to create a 3D array in Python 3...
I guess arrays are called lists, but I don’t really understand them.
I really want to just be able to do this:

Code: Select all

```c
int blueprint[3][5][3];
blueprint[][][] = // stuff
```

Is there any way to do something like this in Python?
All I want to do is fill it with ones and zeroes...
“1,0,0,
1,0,0,
1,0,0,
1,0,0,
1,0,0”
Etc.

### Re: Coding: Fleeting Thoughts

Posted: Fri Feb 02, 2018 4:13 am UTC
If you want vanilla Python, you have to do it by hand:

Code: Select all

```python
blueprint = [None]*3
for x in range(3):
    blueprint[x] = [None]*5
    for y in range(5):
        blueprint[x][y] = [0]*3

# or
blueprint = [[[0]*3 for _ in range(5)] for _ in range(3)]
```

Done, that'll make a 3x5x3 array filled with zeros. (The initial Nones on the earlier levels are just to reserve space; they all get replaced with sublists.)

(Note: be careful when multiplying lists like this - it copies the values by reference. That's fine with primitive values like None or integers, as they're all identical anyway, but if you create an actual object it'll just give you multiple pointers to the same object. That's why I didn't do something like [[[0]*3]*5]*3 - all 15 second-level lists would be pointers to the exact same [0,0,0] list.)
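A quick demonstration of that pitfall, runnable as-is:

```python
bad = [[0] * 3] * 5                   # five references to ONE inner list
bad[0][0] = 1                         # "one" write shows up in every row

good = [[0] * 3 for _ in range(5)]    # five distinct inner lists
good[0][0] = 1                        # only the first row changes
```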

### Re: Coding: Fleeting Thoughts

Posted: Fri Feb 02, 2018 10:52 am UTC
Would I then have to go through and specify each and every "1" in the list?

Code: Select all

```python
blueprint[3][1][1] = "1"
blueprint[3][2][1] = "1"
# etc.
```

Is there an easier/more efficient way of doing this using a different type of variable? Would a tuple work? I don't need or want to change anything in the blueprint.
What would be the best way of going about doing what I want? Is it the 3D list?

### Re: Coding: Fleeting Thoughts

Posted: Fri Feb 02, 2018 11:55 am UTC
Since your first and second dimensions stay the same, you keep the pattern

Code: Select all

```python
blueprint = [ [somethingSomething for _ in range(5)] for _ in range(3)]
```

But instead of [0]*3 (a list of 3 zeroes) you want a list of [1,0,0].
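Putting that together (the comprehension builds a fresh [1,0,0] for every row, so there's no shared-reference problem):

```python
# 3 x 5 x 3 blueprint where every innermost row is the [1, 0, 0] pattern
blueprint = [[[1, 0, 0] for _ in range(5)] for _ in range(3)]
```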

### Re: Coding: Fleeting Thoughts

Posted: Fri Feb 02, 2018 11:59 am UTC
And if my list changes? Do I have to fill it in row by row?

### Re: Coding: Fleeting Thoughts

Posted: Fri Feb 02, 2018 12:36 pm UTC
What kind of change are you thinking of? If you want to build a different "3D list" out of blueprint (i.e. functional style: you don't change data, you make a copy with differences), you can map over all three dimensions and change elements where needed:

Code: Select all

```python
myPrint = [ [ [ value+1 for value in row ] for row in matrix ] for matrix in blueprint ]  # add 1 to every single element

# or if you need their indices
myPrint = [ [ [ i+j+k for (i,value) in enumerate(row) ] for (j,row) in enumerate(matrix) ] for (k,matrix) in enumerate(blueprint) ]  # make every single element the sum of its indices

# well, by now the lines get so long that you either want to split it into multiple lines with indentation
myPrint = [ [ [
        1 if i == j else value  # set the diagonals of all matrices to 1
      for (i,value) in enumerate(row) ]
    for (j,row) in enumerate(matrix) ]
  for (k,matrix) in enumerate(blueprint) ]

# or use classical for loops after all (which does require you to build the lists yourself)
myPrint = []
for (k,matrix) in enumerate(blueprint):
    myPrint.append([])
    for (j,row) in enumerate(matrix):
        myPrint[k].append([])
        for (i,value) in enumerate(row):
            myPrint[k][j].append(i+j+k)  # make every single element the sum of its indices
```

If you want to modify some elements of the existing blueprint, you can use the above classical for loops on blueprint instead of myPrint.

(You can also reassign blueprint with a modified copy, but this only changes blueprint from then on. E.g.

Code: Select all

```python
# something something blueprint
someVariable = blueprint
blueprint = [ [ [ ... ] ... ] ... ]
```

someVariable still has the old list of lists, not the new one.)

By the way, note that indices in python are zero-based.

### Re: Coding: Fleeting Thoughts

Posted: Fri Feb 02, 2018 1:10 pm UTC
Peaceful Whale wrote:And if my list changes? Do I have to fill it in row by row?

Oops... I meant doesn’t change... I do not know how that happened.

Thank you though for the information! It still helps!

### Re: Coding: Fleeting Thoughts

Posted: Fri Feb 02, 2018 1:27 pm UTC
Wait, I actually meant what I said, but I didn’t make myself clear. Sorry about that.

Ahem, let me get it right this time.

I want to create a 3D variable that doesn't change, but each row is different... is there any way to declare it the way you do in C?

I’m coming from a C background, and the new variable types in Python are confusing...
I wish I could just do this:

Code: Select all

```c
int cat[5][2][3] = {
    // stuff goes here
};
```

### Re: Coding: Fleeting Thoughts

Posted: Fri Feb 02, 2018 1:49 pm UTC
I started learning Python this week in my spare time, but I *think* you can do:

Code: Select all

```python
cat = [
  [[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0]],
  [[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0]],
  [[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0]],
]
```

And that's a 3D array, first dimension is 3 elements, second is 5 elements, third is 3 elements.

I'm at work but if that doesn't work I'll fire up a VM and see what I can do.

### Re: Coding: Fleeting Thoughts

Posted: Fri Feb 02, 2018 2:03 pm UTC
Thanks... I know Python is really weird when it comes to white space... and tabs... and never having curly brackets EVER...

Is it possible to make it pretty like this?

Code: Select all

```python
cat = [
  [[0,0,0],
  [0,0,0],
  [0,0,0],
  [0,0,0],
  [0,0,0]],

  [[0,0,0],
  [0,0,0],
  [0,0,0],
  [0,0,0],
  [0,0,0]],

  [[0,0,0],
  [0,0,0],
  [0,0,0],
  [0,0,0],
  [0,0,0]],
]
```

Or would that just mess up the 3D dimension...

### Re: Coding: Fleeting Thoughts

Posted: Fri Feb 02, 2018 2:05 pm UTC
That should work fine, if you prefer it laid out that way. Logically it's the same.

### Re: Coding: Fleeting Thoughts

Posted: Fri Feb 02, 2018 2:30 pm UTC
Thank you!

### Re: Coding: Fleeting Thoughts

Posted: Fri Feb 02, 2018 2:58 pm UTC
Oh haha, I didn't expect you to ask about list literals. Indeed, you can write those wherever you want, however you want, and with whatever you want inside (including more lists).

### Re: Coding: Fleeting Thoughts

Posted: Fri Feb 02, 2018 7:51 pm UTC
Oooooooh!

Python's lack of variable declarations is weird...

### Re: Coding: Fleeting Thoughts

Posted: Fri Feb 02, 2018 7:52 pm UTC
Thanks for the help!

(I'm making a 3D Conway's game of life in Minecraft Pi edition... but right now I'm working on making a giant stadium.)

### Re: Coding: Fleeting Thoughts

Posted: Sat Feb 03, 2018 9:11 pm UTC
Ok, I got my stadium generator working, thank you everyone!

### Re: Coding: Fleeting Thoughts

Posted: Sun Feb 04, 2018 3:58 am UTC
Peaceful Whale wrote:
Thanks... I know python is really weird when it comes to white space...

Python cares about whitespace for indenting statements, but that's about it. Within a line, or for indenting continuing lines of the same statement, it doesn't really care. Since you're inside the brackets it's all part of one statement, and you can indent it however you want. There are style guides and conventions of course, but the interpreter will let you do whatever.

### Re: Coding: Fleeting Thoughts

Posted: Sun Feb 04, 2018 10:09 am UTC
Just posting here to say: I made a really very simple program only once. I programmed it to say, 'Hi [whatever name you type into a dialogue box].' And it was so enchanting and charming to see, 'Hi Ginger,' on my screen every time I used that program, yet... I dunno how to program/code anymore? </3 :(

### Re: Coding: Fleeting Thoughts

Posted: Sun Feb 04, 2018 12:51 pm UTC
Ginger wrote:Just posting here to say: I made a really very simple program only once. I programmed it to say, 'Hi [whatever name you type into a dialogue box].' And it was so enchanting and charming to see, 'Hi Ginger,' on my screen every time I used that program, yet... I dunno how to program/code anymore? </3

Well, whatever language you're using, it's basically the same.

If you have a C compiler:

Code: Select all

```c
#include <stdio.h>

int main() {
    // get a variable for the name (a char array, since it holds text)
    char name[20];
    printf("Please enter your name\n"); // \n makes a new line
    scanf("%19s", name); // get the name and store it in the name variable
    printf("hello %s", name); // %s is a placeholder; that is where name will go
    return 0; // I don't think you need this here, but I always do it.
}
```

You can just copy and paste this, it should work.
Most other languages are the same.
Make a variable.
Get the name
Print the name.

### Re: Coding: Fleeting Thoughts

Posted: Sun Feb 04, 2018 12:52 pm UTC
Thank you so, so much... Peaceful Whale, you... made my early morning. <3 :)

### Re: Coding: Fleeting Thoughts

Posted: Sun Feb 04, 2018 2:01 pm UTC
Jumping back to the 3D array, one solution that I don't think I saw mentioned is the possibility of storing 'point' information in a Dictionary (I think that's Python's term; I'm rusty) with a key of "x y z" mapping to the desired-to-store "n". Either create an entry for every possible x, y and z and populate it with that x,y,z's n, or just store any significant n and treat non-existing/null entries as your default (zero?) value, with dictionary-trimming being an option to re-default a locale (if you're ever changing it) as an alternative to overwriting with that default; that's harder on processing but may be easier on memory in large-but-sparse 'arrays'.

Writing the Dictionary with indexes beyond single digits requires a space (or other, purely arbitrary) delimiter so that "1 23 4" doesn't clash with "12 3 4" or "1 2 34". Actually populating the Dictionary could be done by string-slicing "ABCdefGHI…" into plane-strings (0..xmax), the plane-strings into row-strings (0..ymax), and then the row-strings into the base units (0..zmax), to then write to the dictionary (or not, if 'default' and you're being sparing in that regard). Single-character entries (0/1) would work; indeed, it could be packed even better by binarising the ord of a char, or something similar, at the last level if you don't mind it being a bit harder to manually edit or review.

Multichar entries (or 0/1s, but with the option to expand or even deliberately store blanks) require you to subdelimit (e.g. "AA,B,CCC:d,eee,ff:GGG,,I;…" to plane-unpack on ;s, row-unpack on :s, and element-unpack on commas), but that's an easy enough alternative to pursue if you want to break out from pure binary cells (e.g. 3D 'sandgame' complexity).
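A minimal sketch of that space-delimited dictionary scheme (the helper names are mine; absent keys read as the default value, and setting a cell back to the default trims it from the dictionary):

```python
grid = {}

def set_cell(x, y, z, value=1):
    key = f"{x} {y} {z}"        # space delimiter: "1 23 4" != "12 3 4"
    if value:
        grid[key] = value
    else:
        grid.pop(key, None)     # trim defaults rather than storing them

def get_cell(x, y, z):
    return grid.get(f"{x} {y} {z}", 0)   # absent cells count as 0

set_cell(1, 23, 4)              # one live cell; everything else is sparse
```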

And unless your Conwayesque rules bias overly towards Cell Birth and away from Cell Death, it is highly likely that each iteration will not overwhelm the slightly less efficient Dictionary storage method (trimming to make not(exist(cell("x y z"))) a readable proxy for your zeros, or whatever) quite so much as maintaining a [Xmin..Xmax][Ymin..Ymax][Zmin..Zmax] array, and it makes it easier (if more dangerous, left unfettered1!) to dynamically extend the Conwayspace 'infinitely' out into the unknown, and/or follow a single 'glider' construct ('frame dragging' its footprint, if it's not rooted in a repeating Glider Gun).

But you say you've implemented it already. This is just my own fleeting thoughts (I hope it makes sense!) that I would have posted much earlier if I'd been keeping up with the thread like I have not been doing. Despite all intentions.

1 I'd suggest keeping track of the number of cells in use (either ask for the number of elements currently in the Dictionary, or else increment a counter each time you create an entry) and using that to avoid approaching a silly-limit number (say, 0.5 Gigaitems, which might be a few Gigabytes of practical memory?) without killing the simulation, or at least pausing it with a warning before continuing (until 1 Gigaitems, then 2 Gigaitems…). That would be a good housekeeping measure to pre-program into this construct, so you can keep a lid on things running riot.

### Re: Coding: Fleeting Thoughts

Posted: Mon Feb 05, 2018 9:18 am UTC
Requiring string manipulation (and thus possibly memory allocations), hash calculations and several indirections to get a value will have a performance impact. If your algorithm is lots of accesses and few calculations (like matrix multiplication), that is going to be noticeable.

I don't know much about python internals, but that approach also has a memory overhead of >100 bytes per value, so avoid this unless your array is sufficiently sparse.
Spoiler:
* a key of "10 11 12" is 16 bytes for utf16-text, 50 bytes for a string object
* every bucket is a linked list, so an entry requires 16 bytes for pointers to value and next, 8 bytes for the hash, and 16 bytes for the additional object

The most performant 3d array is a 1d array with manual indexing. It has the best cache locality and the least amount of indirection, but it's a bit error-prone when coding.

Excuse my javascript.

Code: Select all

let x_size = 10;
let y_size = 20;
let z_size = 30;

let arr = new Array(x_size * y_size * z_size);

let z_stride = x_size * y_size;
let y_stride = x_size;
let x_stride = 1;

arr[ x * x_stride + y * y_stride + z * z_stride ] = 42;

You could wrap that in a class with get(x,y,z) and set(x,y,z,value) methods, assuming python can inline those calls.

On the other hand, unless python has an actual "array of double" instead of "array of pointer to something", those optimizations aren't going to make much of a difference. Just use the built-in stuff.
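(For what it's worth, Python's standard library does ship an actual flat "array of double": the `array` module. It's one-dimensional only, so the manual-indexing trick still applies; a quick sketch, with my own `idx` helper mirroring the strides above:)

```python
from array import array

x_size, y_size, z_size = 10, 20, 30

# One contiguous block of C doubles: 8 bytes per element, with no
# per-element Python object overhead (unlike a list of floats).
arr = array('d', [0.0] * (x_size * y_size * z_size))

def idx(x, y, z):
    # Same strides as the javascript version: x_stride=1, y_stride=x_size,
    # z_stride=x_size*y_size.
    return x + y * x_size + z * x_size * y_size

arr[idx(3, 4, 5)] = 42.0
```

(numpy gives you the same contiguous storage with real multi-dimensional indexing layered on top.)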

### Re: Coding: Fleeting Thoughts

Posted: Mon Feb 05, 2018 9:21 am UTC
Yeah, there's a lot of different options for how to store a multidimensional array, and they all have their pros and cons.

Lists-of-lists (as before)

Code: Select all

# as a comprehension
cat = [
  [
    [
      0 for i in range(3)
    ] for j in range(5)
  ] for k in range(3)
]

# as a literal
cat = [
  [
    [0, 0, 0],
    [0, 0, 0],
    [0, 0, 0],
    [0, 0, 0],
    [0, 0, 0],
  ],
  ... etc
]

# element access
cat[0][1][2] = 5
print(cat[i][j][k])
pros: basically the same behaviour as C multi-dim arrays if you're used to that
cons: you have a whole bunch of separate lists scattered all over memory, which isn't the most efficient (but in most cases this isn't bad enough that you'd actually notice, so probably not worth worrying about). Also there's nothing enforcing it to be rectangular - ie it's possible to have lists of lists of different lengths and confuse everything (so you need to be careful when you're manipulating them to make sure this doesn't happen).

Dictionary-over-tuples (as suggested above):

Code: Select all

# as a comprehension
cat = {
  (i, j, k): 0
  for i in range(3)
  for j in range(5)
  for k in range(3)
}

# as a literal
cat = {
  (0, 0, 0): 0,
  (0, 0, 1): 0,
  (0, 0, 2): 0,
  (0, 1, 0): 0,
  ... etc
}

# element access
cat[0, 1, 2] = 5
print(cat[i, j, k])
pros: a single data structure, so all your data is in one place. The keys are a lot more flexible, you can have negative indexes for example. It's easy to make a "sparse" array where only some keys have values, and others are missing (and then don't take up space).
cons: the literal is super clunky to write, and the element access with the tuple inside the subscript is a little weird, doesn't really look like something you're used to seeing in Python code. Also while the data is all in once place, it's also not the most space-efficient as it needs to store all the keys along with the data, plus all the other overhead stuff Tub mentioned... probably not something you'd notice when it's this small, but for larger ones you'd definitely feel it.

Single-list:

Code: Select all

# as an expression
cat = [0] * (3 * 5 * 3)

# as a literal
cat = [
  0, 0, 0,
  0, 0, 0,
  0, 0, 0,
  0, 0, 0,
  0, 0, 0,
  ... etc
]

# element access
cat[5] = 5
print(cat[i*15 + j*3 + k])
pros: pretty efficient, everything is all in one big list. This is also a pretty common C idiom for dynamic-size arrays (while in C an array-of-arrays isn't too bad to manipulate, a pointer-to-pointers is just awkward to manage).
cons: every time you have to do an element access you need this little formula which just gets tiresome fast, and is an extra opportunity for typos. You can reduce this with some sort of wrapper that does the calculation for you, though.
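A minimal sketch of such a wrapper (the class and names are mine, not from any library), which also buys you a bounds check, so a typo'd index raises instead of silently landing in a neighbouring row:

```python
class Grid3D:
    """A rectangular 3-D grid backed by a single flat list."""

    def __init__(self, ni, nj, nk, fill=0):
        self.ni, self.nj, self.nk = ni, nj, nk
        self._cells = [fill] * (ni * nj * nk)

    def _index(self, i, j, k):
        # Reject out-of-range indices; without this, a bad index can
        # quietly wrap into a different row's cell.
        if not (0 <= i < self.ni and 0 <= j < self.nj and 0 <= k < self.nk):
            raise IndexError((i, j, k))
        # For a 3x5x3 grid this is the same i*15 + j*3 + k formula.
        return (i * self.nj + j) * self.nk + k

    def get(self, i, j, k):
        return self._cells[self._index(i, j, k)]

    def set(self, i, j, k, value):
        self._cells[self._index(i, j, k)] = value

cat = Grid3D(3, 5, 3)
cat.set(0, 1, 2, 5)
```

Whether Python actually inlines those method calls is another question; it's trading a little speed for fewer typo opportunities.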

External library:

Code: Select all

import numpy

# as a function call
cat = numpy.zeros((3, 5, 3), dtype=int)

# as a literal
cat = numpy.array([
  [
    [0, 0, 0],
    [0, 0, 0],
    [0, 0, 0],
    [0, 0, 0],
    [0, 0, 0],
  ],
  ... etc
])

# element access
cat[0][1][2] = 5
cat[0, 1, 2] = 5  # either syntax is supported
print(cat[i][j][k])
print(cat[i, j, k])  # either syntax is supported
pros: libraries can let you do a lot of complicated things quite quickly
cons: libraries can let you do a lot of complicated things without fully understanding what you're getting yourself into, and suddenly your program is a very complicated thing when you weren't expecting it... the library probably does a lot of things that you have no particular interest in doing, since it's not really designed for this (like, you're not going to take two of these and try to somehow do some sort of matrix-multiplication between them or whatnot). Probably not a realistic suggestion for your actual use-case.

### Re: Coding: Fleeting Thoughts

Posted: Mon Feb 05, 2018 10:23 am UTC
(Good examples, Phlip. Better than the pseudocode I'd originally included in my Dict suggestion before reverting to natural language. With my Python being rusty, it was more Perlish, which wouldn't have helped much. )

Tub wrote:I don't know much about python internals, but that approach also has a memory overhead of >100 bytes per value, so avoid this unless your array is sufficiently sparse.
Didn't realise it'd be quite that much. (The suggested 0.5 Gigaitems would be 50 GB of memory? Yeah, definitely lower that initial check limit.)
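(If you want to see where the bytes go on your own build, sys.getsizeof gives a rough idea. Note it doesn't follow references, so you have to add up the key, its contents, and the hash-table share yourself, and the exact numbers vary by CPython version and platform; this is a back-of-envelope sketch, not a precise accounting:)

```python
import sys

# Shallow cost of one tuple key plus the ints inside it.
key = (10, 11, 12)
per_key = sys.getsizeof(key) + sum(sys.getsizeof(c) for c in key)

# Shallow cost of the dict's own hash table, averaged per entry
# (this excludes the keys and values themselves).
d = {(i, j, k): 0 for i in range(10) for j in range(10) for k in range(10)}
per_slot = sys.getsizeof(d) / len(d)

print("tuple key plus its ints: %d bytes" % per_key)
print("dict table share per entry: ~%.0f bytes" % per_slot)
```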

But in a (flat) Conwayspace, sparsity is the watchword. So if the design of the born/live/die limits in 3D is equivalent, then you'd perhaps (depending on the rules) get 20%-or-so populated as a top limit (a block of self-sustaining oscillations just far enough apart not to nudge each other into overcrowding-death), and that'd be an extreme position, at an amortised 20 bytes per cell over a cuboidal volume that an array would store with a fifth or a tenth of the memory. But take a 3x3x3 'glider' (going off the 3x3 2D version) zooming off into the void (or as far as the numeric-rendered-as-text key can reliably index an "x y z" without imprecision on one or more dimensional references): it'll still be just 2,700 bytes (less those cells that are null/zero within the basic 3x3x3, plus those that extend into the next-step 3x3x3 on every traversing step) while drifting off into what an array-defined space would demand (possible Python background efficiencies and cheaty-shortcuts aside) be stored in an impossibly resized array of whatever dimensionality. A double-glider generator (one sent off in each of two directions) would require just 5.4k, even while the X/Y/Z maxes and mins diverge alarmingly, if one tried to store it as an array box that kept up with the boundaries it expanded out into.

It's one solution, though. I wouldn't define a full Minecraft-like environment as a Dictionary-indexed 'array' with so many cells filled (even if air=null). For the infinite sandbox version, I'd shove it through a check for a stored delta covering a volumetric tile (a convenient 128³ array, or somesuch) that only exists/is loaded to memory whilst there's possibility of interaction, returning the procedurally-produced contents otherwise. And I reckon I could tweak it so that low-change deltas can be stored as Dict-defined changes, switching to Arr-storage once sufficient upheaval has been made by the players.
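That chunked scheme might look something like this sketch (a 16³ tile instead of 128³ to keep it small, the procedural generation reduced to a constant default, and all the names hypothetical):

```python
CHUNK = 16  # tile edge; the idea above muses about 128

class ChunkedWorld:
    """Only chunks somebody has touched exist in memory; everywhere
    else falls back to the (here trivially constant) default."""

    def __init__(self, default=0):
        self.default = default
        self.chunks = {}  # (cx, cy, cz) -> flat list of CHUNK**3 cells

    def _locate(self, x, y, z):
        # divmod handles negative coordinates correctly in Python.
        cx, ox = divmod(x, CHUNK)
        cy, oy = divmod(y, CHUNK)
        cz, oz = divmod(z, CHUNK)
        return (cx, cy, cz), (ox * CHUNK + oy) * CHUNK + oz

    def get(self, x, y, z):
        key, i = self._locate(x, y, z)
        chunk = self.chunks.get(key)
        return self.default if chunk is None else chunk[i]

    def set(self, x, y, z, value):
        key, i = self._locate(x, y, z)
        chunk = self.chunks.setdefault(key, [self.default] * CHUNK ** 3)
        chunk[i] = value

w = ChunkedWorld()
w.set(1000, -3, 7, 9)  # only one 16**3 chunk materialises
```

(A real version would also unload chunks no player can interact with, and could swap an individual chunk between delta-Dict and full-Arr storage as the upheaval grows.)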

Perhaps, in a GoL 'board', smart analysis of the regions with populated (live) elements could store small or medium-sized offset arrays (tiled, where abutting) with all the efficiency of dense storage, within a structure held as a dictionary-indexed reference. Further optimisations could include array-equivalence analysis: the existence of a Glider Gun might suggest that the region at offset(+100 +100 +100) holds the same info as offset(+200 +200 +200), etc, with maybe the +150s being the same as the +250s in a different part of the movement cycle of other gliders. And -100(³) is identifiably the same, but reflected? It costs processing overhead, but with a small and ultimately constant number of arrays (each basic step in a glider lifecycle, plus that of the Gun element as it cycles each 'firing' action) referred to by pointers, your basic multidirectional glider spawner hits memory limits only due to the increasing tally of macro elements, long after faithful array storage or cell-level dictioneering would have brought your sim to a crawl-or-crash scenario.

(Remember to ensure that interacting 'boxes', such as a vanguard Glider encountering a 3-cell flip-flop (or equivalent 3D version) placed in waiting, become their own new arrays again while the encounter resolves (destroyed, transformed, whatever; any rooted remnants awaiting the next glider, any moving ones likely being either a transformed or differently configured glider/crawler, howeverso that ambush resolves itself). It's an advanced Memory Management art that you, as programmer, could do better than the dumb compiler/interpreter (which cannot be flexible enough to fully understand all the optimisations it could make on your behalf), but it is still open-ended, and a challenge even to the human brains who have been studying this stuff, academically, for years. It's a tough problem, but please don't mind my idly pondering aloud about it!)