Xeio wrote:Today's Google doodle is nice.
They appear to measure “shortest solution” in terms of fewest instructions, not least movement.
Qaanol wrote:Xeio wrote:Today's Google doodle is nice.
They appear to measure “shortest solution” in terms of fewest instructions, not least movement.
Code: Select all
factorial = product . enumFromTo 1
isPrime n = factorial (n - 1) `mod` n == n - 1
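The same Wilson's-theorem trick, sketched in Python (the `is_prime` name is just illustrative; note the extra `n > 1` guard, which the Haskell one-liner skips):

```python
from math import factorial

# Wilson's theorem: for n > 1, n is prime iff (n-1)! ≡ -1 ≡ n-1 (mod n).
def is_prime(n):
    return n > 1 and factorial(n - 1) % n == n - 1

print([x for x in range(2, 30) if is_prime(x)])  # the primes below 30
```

Mathematically correct, but absurdly slow: computing (n-1)! dwarfs the cost of plain trial division.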
hotaru wrote:Qaanol wrote:Xeio wrote:Today's Google doodle is nice.
Xanthir wrote:You might be interested in looking into Streams, like RxJS and the like. The concepts are very similar, and they've thought about many of these problems already.
Yakk wrote:I've poked at them, but documentation tends to focus on *using* them, not on how they were written and how they work.
Tub wrote:Yakk wrote:I've poked at them, but documentation tends to focus on *using* them, not on how they were written and how they work.
Well, they are open source...
I'm not entirely sure I read your notation right, but you seem to be mixing the two fundamental control principles of streams: push and pull.
In push-based systems, the sources generate new data whenever they feel like it, and call into their sinks with the new data. This is useful in complex event processing, e.g. when data arrives in near real-time from sensors.
In pull-based systems, the call stack is the other way around. You call into your sources, requesting a piece of data, which in turn request data from their sources etc. This is commonly called an operator graph in database systems. Because your sink will only request as much data as it needs, this allows early aborting of queries suffixed with 'LIMIT n'. It's also used anywhere you get an Iterator.
Pull-based systems require all data to be available when requested, e.g. in memory or on a local disk. You can kinda get around that in a language with async/await or something, but that'll only turn your source<T> into source<Promise<T>>, and then you're usually better off with a push-based system.
In push-based systems, no pipe or sink needs to know about its sources. In a pull-based system, no pipe or source needs to know about its sinks. Your definitions of source and sink seem to imply a push-based system, but then your pipes get a reference to both a source and a sink, and that seems weird.
You can do everything functionally, but IMHO it's easier to model as an (acyclic, connected, directed) graph of nodes, each being an object with a standardized API, like push<T> or observe<callback<T>> (for push-based) or get<T> or getIterator<T> (for pull-based). Passing the connections in a type-safe way in the constructor just seems cleaner and more robust than mixing connections and data in an arbitrary argument list.
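A minimal sketch of the two control flows in Python (class and method names are made up for illustration): the push side exposes an observe(callback) registration, the pull side is just an ordinary iterator.

```python
# Push-based: the source calls into its registered sinks when data appears.
class PushSource:
    def __init__(self):
        self._sinks = []

    def observe(self, callback):          # sinks register themselves
        self._sinks.append(callback)

    def emit(self, value):                # "whenever it feels like it"
        for sink in self._sinks:
            sink(value)

# Pull-based: the sink calls into its source, one item at a time.
def pull_source(data):
    yield from data                       # an ordinary iterator

received = []
src = PushSource()
src.observe(received.append)              # the sink controls nothing about timing
src.emit(1)
src.emit(2)

# The pulling sink decides how much to consume -- early abort, like LIMIT 2.
pulled = [x for _, x in zip(range(2), pull_source([1, 2, 3]))]
print(received, pulled)
```

Note how the call stacks invert: in the push half the source's emit() is on top; in the pull half the consumer's loop is.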
Yakk wrote:Tub wrote:Well, they are open source...
And so is Linux. That doesn't mean it is reasonable to learn the Linux kernel's overall architecture by reading the source.
Yakk wrote:Source, Sink and Pipe are 3 kinds of graphs. They can be composed in certain ways. If you connect a source to a pipe, you get a source of the pipe's output. If you connect a pipe to a sink, you get a sink of the pipe's input. If you connect a pipe to a pipe, you get a pipe.
Yakk wrote:What I'm missing is a clean way to handle multiple inputs/outputs.
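One way to realize that composition algebra, sketched pull-flavoured in Python with generators (all names are illustrative, not Yakk's actual design): a Source is a zero-argument function returning an iterator, a Pipe maps an iterator to an iterator, a Sink consumes one, and the three connect rules fall out as function composition.

```python
# Source + Pipe = Source of the pipe's output.
def connect_source_pipe(source, pipe):
    return lambda: pipe(source())

# Pipe + Pipe = Pipe.
def connect_pipe_pipe(p1, p2):
    return lambda it: p2(p1(it))

# Pipe + Sink = Sink of the pipe's input.
def connect_pipe_sink(pipe, sink):
    return lambda it: sink(pipe(it))

nums = lambda: iter([1, 2, 3, 4])                  # Source
doubler = lambda it: (2 * x for x in it)           # Pipe
evens = lambda it: (x for x in it if x % 2 == 0)   # Pipe
total = sum                                        # Sink

combined = connect_source_pipe(nums, connect_pipe_pipe(evens, doubler))
print(list(combined()))                            # the doubled evens: [4, 8]
print(connect_pipe_sink(doubler, total)(nums()))   # 2+4+6+8 = 20
```

Multiple inputs/outputs are exactly where this one-in/one-out shape stops being enough, which seems to be the open question.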
Xeio wrote:No loop, just a "where in" clause which apparently caused the actual read rows to grow exponentially as the number of items in the list grew.
Tub wrote:WHERE IN (1, 2, 3, 4) or WHERE IN (SELECT ...) ?
The first shouldn't be much of a problem for short lists, but you'd need to figure out if longer lists get properly indexed so they can be joined. If the optimizer translates that into (WHERE a == 1 OR a == 2 OR a == 3 OR a == 4), that'd be bad. Temporary tables (or, apparently, IQueryable) can help.
The latter is a dependent subquery, and using those means taunting the optimizer. Make sure the query optimizer actually converts the dependent subquery into a proper join. Better yet, rewrite the query to be an actual join. Dependent subqueries aren't always optimized well; if the subquery is executed once per row then you'll see what you're seeing.
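To make the rewrite concrete, a small sqlite3 sketch (a toy schema, not the thread's actual tables): the IN-subquery form and the explicit join return the same rows here, and the join form is the shape you want the optimizer to reach on its own.

```python
import sqlite3

# Toy schema, loosely modeled on the Items/Prices discussion above.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE Items  (ID INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE Prices (Id INTEGER PRIMARY KEY, ItemID INTEGER, Marks INTEGER);
    INSERT INTO Items  VALUES (1, 'sword'), (2, 'shield'), (3, 'potion');
    INSERT INTO Prices VALUES (1, 1, 10), (2, 1, 12), (3, 2, 5);
""")

# WHERE ... IN (SELECT ...) form.
sub = db.execute("""
    SELECT p.Id, p.Marks FROM Prices p
    WHERE p.ItemID IN (SELECT ID FROM Items WHERE Name LIKE 's%')
    ORDER BY p.Id
""").fetchall()

# Rewritten as an explicit join -- same result set.
joined = db.execute("""
    SELECT p.Id, p.Marks FROM Prices p
    JOIN Items i ON i.ID = p.ItemID
    WHERE i.Name LIKE 's%'
    ORDER BY p.Id
""").fetchall()

print(sub == joined, sub)
```

The equivalence holds because Items.ID is unique; joining against a non-unique key could duplicate rows where the IN form would not.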
Code: Select all
SELECT [p].[Id], [p].[ApiKeyId], [p].[ExpiresIn], [p].[ItemID], [p].[Marks], [p].[Time], [p.Item].[ID], [p.Item].[IsExtraordinary], [p.Item].[ItemCategory], [p.Item].[Name], [p.Item].[Rarity]
FROM (
    select * from Prices where Id in (select max(Id) from Prices group by ItemId)
) AS [p]
INNER JOIN [Items] AS [p.Item] ON [p].[ItemID] = [p.Item].[ID]
WHERE [p].[ItemID] IN (1871, 1898, 1908, 2063, 1909, 1918, 1915, 2056, 1893, 1913, 1892, 1873, 1904, 2066, 2062, 1919, 1911, 2058, 1891, 1906, 1914, 1899, 1901, 1905, 1916, 1896, 2060, 1894, 2057, 1870, 1920, 2064, 1903, 1900, 1872, 1895, 1912, 1907, 2065, 2059, 1897, 1917, 1973)
ORDER BY [p.Item].[Rarity] DESC, [p.Item].[Name]
Code: Select all
SELECT [p].[Id], [p].[ApiKeyId], [p].[ExpiresIn], [p].[ItemID], [p].[Marks], [p].[Time], [p.Item].[ID], [p.Item].[IsExtraordinary], [p.Item].[ItemCategory], [p.Item].[Name], [p.Item].[Rarity]
FROM (
    select * from Prices where Id in (select max(Id) from Prices group by ItemId)
) AS [p]
INNER JOIN [Items] AS [p.Item] ON [p].[ItemID] = [p.Item].[ID]
WHERE [p].[ItemID] IN (
    SELECT TOP(40) [i].[ID]
    FROM [Items] AS [i]
    WHERE (CHARINDEX(@__name_1, [i].[Name]) > 0) OR (@__name_1 = N'')
    ORDER BY [i].[Name]
)
ORDER BY [p.Item].[Rarity] DESC, [p.Item].[Name]
Code: Select all
U->T* + U->V* = U->(T or V)*
U->T + U->V = U->(T and V)
U->T* + U->V* = U->(T* and V*)
U->T* + U->V* = U->((T and V)* and (T* or U*))
A->B + U->T = (A or U)->(B or T)
A->B + U->T = (A and U)->(B and T)
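In generator terms, the "or" combination is an interleaving merge and the "and" combination is a zip; a sketch of both (the merge order here is simple round-robin, which is just one possible policy):

```python
# "or": U->(T or V) -- values from either source, round-robin interleaved.
def merge(a, b):
    a, b = iter(a), iter(b)
    while True:
        took = 0
        for it in (a, b):
            try:
                yield next(it)
                took += 1
            except StopIteration:
                pass                      # that side is exhausted
        if took == 0:
            return                        # both sides exhausted

# "and": U->(T and V) -- one value from each source, paired.
def zip_both(a, b):
    return zip(a, b)

print(list(merge([1, 2], ['a', 'b', 'c'])))     # all five values, interleaved
print(list(zip_both([1, 2], ['a', 'b', 'c'])))  # pairs, stops at the shorter side
```

The asymmetry in the T* vs T cases above mirrors this: merge keeps going while either side has data, zip ends with the shorter one.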
Xeio wrote:Newer Faster Query:
Code: Select all
SELECT TOP(40) [i].[ID]
FROM [Items] AS [i]
WHERE (CHARINDEX(@__name_1, [i].[Name]) > 0) OR (@__name_1 = N'')
ORDER BY [i].[Name]
Initially I had assumed that would have to be the slowest part of the query, but I ran into some other performance problems (including the above) that dwarfed any issue the subquery may have caused.
Thesh wrote:If that is causing problems, the Prices subquery is a good candidate for an indexed view.
Thesh wrote:But this made me throw up in my mouth:
Code: Select all
SELECT TOP(40) [i].[ID]
FROM [Items] AS [i]
WHERE (CHARINDEX(@__name_1, [i].[Name]) > 0) OR (@__name_1 = N'')
ORDER BY [i].[Name]
Code: Select all
var items = _marketContext.Items
    .Where(i => i.Name.Contains(name))
    .OrderBy(i => i.Name)
    .Take(60)
    .Select(i => i.ID);

var prices = await _marketContext.Prices
    .FromSql("select * from Prices where Id in (select max(Id) from Prices group by ItemId)")
    .Include(p => p.Item)
    .OrderByDescending(p => p.Item.Rarity)
    .ThenBy(p => p.Item.Name)
    .Where(p => items.Contains(p.ItemID))
    .AsNoTracking()
    .ToListAsync();
Code: Select all
int blueprint[3][5][3];
blueprint[][][] = //stuff
Code: Select all
blueprint = [None]*3
for x in range(3):
    blueprint[x] = [None]*5
    for y in range(5):
        blueprint[x][y] = [0]*3
# or
blueprint = [[[0]*3 for _ in range(5)] for _ in range(3)]
Code: Select all
blueprint[3][1][1] = "1"
blueprint[3][2][1] = "1"
# etc.
Code: Select all
blueprint = [ [somethingSomething for _ in range(5)] for _ in range(3)]
Code: Select all
myPrint = [ [ [ value+1 for value in row ] for row in matrix ] for matrix in blueprint ]  # add 1 to every single element
# or if you need their indices
myPrint = [ [ [ i+j+k for (i,value) in enumerate(row) ] for (j,row) in enumerate(matrix) ] for (k,matrix) in enumerate(blueprint) ]  # make every single element the sum of its indices
# well, by now the lines get so long that you either want to split it into multiple lines with indentation
myPrint = [ [ [
        1 if i == j else value  # set the diagonals of all matrices to 1
        for (i,value) in enumerate(row) ]
    for (j,row) in enumerate(matrix) ]
    for (k,matrix) in enumerate(blueprint) ]
# or use classical for loops after all (which does require you to build up the lists yourself)
myPrint = []
for (k,matrix) in enumerate(blueprint):
    myPrint.append([])
    for (j,row) in enumerate(matrix):
        myPrint[k].append([])
        for (i,value) in enumerate(row):
            myPrint[k][j].append(i+j+k)  # make every single element the sum of its indices
Code: Select all
# something something blueprint
someVariable = blueprint
blueprint = [ [ [ ... ] ... ] ... ]
Peaceful Whale wrote:And if my list changes? Do I have to fill it in row by row?
Code: Select all
int cat[5][2][3] = {
//stuff goes here
};
Code: Select all
cat = [
    [[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0]],
    [[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0]],
    [[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0]],
]
Code: Select all
cat = [
    [[0,0,0],
     [0,0,0],
     [0,0,0],
     [0,0,0],
     [0,0,0]],
    [[0,0,0],
     [0,0,0],
     [0,0,0],
     [0,0,0],
     [0,0,0]],
    [[0,0,0],
     [0,0,0],
     [0,0,0],
     [0,0,0],
     [0,0,0]],
]
Peaceful Whale wrote:↶
Thanks... I know python is really weird when it comes to whitespace...
Code: Select all
enum ಠ_ಠ {°□°╰=1, °Д°╰, ಠ益ಠ╰};
void ┻━┻︵╰(ಠ_ಠ ⚠) {exit((int)⚠);}
Amy Lee wrote:Just what we all need... more lies about a world that never was and never will be.
Azula to Long Feng wrote:Don't flatter yourself, you were never even a player.
Ginger wrote:Just posting here to say: I made a really very simple program only once. I programmed it to say, 'Hi [whatever name you type into a dialogue box].' And it was so enchanting and charming to see, 'Hi Ginger,' on my screen every time I used that program, yet... I dunno how to program/code anymore? </3
Code: Select all
#include <stdio.h>

int main()
{
    // get a variable for the name
    char name[20];
    printf("Please enter your name\n"); // \n makes a new line
    scanf("%19s", name); // get the name and store it in the name variable; the width limit prevents overflow
    printf("hello %s\n", name); // %s is a string placeholder. That is where name will go.
    return 0; // I don't think you need this here, but I always do it..
}
Code: Select all
let x_size = 10;
let y_size = 20;
let z_size = 30;
let arr = new Array(x_size * y_size * z_size);
let z_stride = x_size * y_size;
let y_stride = x_size;
let x_stride = 1;
arr[ x * x_stride + y * y_stride + z * z_stride ] = 42; // x, y, z are the 3-D indices
Code: Select all
# as comprehension
cat = [
    [
        [
            0 for i in range(3)
        ] for j in range(5)
    ] for k in range(3)
]
# as a literal
cat = [
    [
        [0, 0, 0],
        [0, 0, 0],
        [0, 0, 0],
        [0, 0, 0],
        [0, 0, 0],
    ],
    ... etc
]
# element access
cat[0][1][2] = 5
print(cat[i][j][k])
Code: Select all
# as comprehension
cat = {
    (i, j, k): 0
    for i in range(3)
    for j in range(5)
    for k in range(3)
}
# as a literal
cat = {
    (0, 0, 0): 0,
    (0, 0, 1): 0,
    (0, 0, 2): 0,
    (0, 1, 0): 0,
    ... etc
}
# element access
cat[0, 1, 2] = 5
print(cat[i, j, k])
Code: Select all
# as a generator:
cat = [0] * (3 * 5 * 3)
# as a literal
cat = [
    0, 0, 0,
    0, 0, 0,
    0, 0, 0,
    0, 0, 0,
    0, 0, 0,
    ... etc
]
# element access
cat[5] = 5
print(cat[i*15 + j*3 + k])
Code: Select all
import numpy
# as a generator:
cat = numpy.zeros((3, 5, 3), dtype=int)
# as a literal
cat = numpy.array([
    [
        [0, 0, 0],
        [0, 0, 0],
        [0, 0, 0],
        [0, 0, 0],
        [0, 0, 0],
    ],
    ... etc
])
# element access
cat[0][1][2] = 5
cat[0, 1, 2] = 5 # either syntax is supported
print(cat[i][j][k])
print(cat[i, j, k]) # either syntax is supported
Tub wrote:I don't know much about python internals, but that approach also has a memory overhead of >100 bytes per value, so avoid this unless your array is sufficiently sparse.
Didn't realise it'd be quite that much. (The suggested 0.5 Gigaitems would be 50 GB of memory? Yeah, definitely lower that initial check limit.)
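A rough way to see that per-entry overhead for yourself (exact numbers vary by CPython version and platform; this counts only the hash table plus the key tuples, so it understates the true cost):

```python
import sys

# Estimate bytes per entry for the dict-of-index-tuples approach.
n = 10_000
d = {(i, 0, 0): 0 for i in range(n)}
table_bytes = sys.getsizeof(d)                # the hash table itself
key_bytes = sum(sys.getsizeof(k) for k in d)  # one 3-tuple key per entry
print((table_bytes + key_bytes) // n, "bytes/entry (lower bound)")
```

A flat list or a numpy array avoids the per-entry key objects entirely, which is why they're the better fit for dense data.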