So I've been reading Cockshott quite a bit, and a recent comment thread prompted me to write a bit of a huge essay on why I think central planning could work, but should be limited to physical goods, with a market still existing (though of course only for worker-owned ventures) specifically for services and for low-volume new products. Anyways, here it goes:

So I will first make the argument that central planning is not good at dealing with services, because their inputs and outputs are neither easily measurable nor easy to predict. Here the LKP (local knowledge problem) shows itself: revealing the information needed to optimize sometimes can't be done at all, not just for motivational reasons but for social ones. For example, making public the fact that person X came to see psychiatrist Z at time Y is counterproductive.

There is also the issue of introducing new commodities, and of deciding how to decide which commodities to introduce. Why exactly this is an issue will be elaborated on below.

A requirement for cybernetic planning is the choice of a cost function. Cockshott proposes SNLT (socially necessary labour time) as that function. This is a good function, but it doesn't account for shortage, so Cockshott proposes biasing towards the production of goods where, once sold on a consumer market (say, for labour tokens), demand outstrips supply.

This is actually equivalent to having the following as a cost function, with m as the commodity vector and djdc as the gradient vector:

float j = 0;
float[] djdc = new float[m.size()];
int i = 0;
for (Commodity c : m) {
    float snlt = c.averageSNLT();
    // Each commodity contributes price/SNLT to the cost; shortage
    // goods, whose market price exceeds their labour cost, dominate.
    j += c.endpointPrice() / snlt;
    // The derivative of price/SNLT with respect to SNLT is
    // -price/SNLT^2, since the derivative of k * 1/x is -k/x^2.
    djdc[i] = -c.endpointPrice() / (snlt * snlt);
    i++;
}

Indeed, minimizing this cost function will give people whatever they want most, and the ratio of $/SNLT should eventually converge towards 1, assuming that one """dollar""" is one average abstract labour hour.

The issue is that this cost function only works well for consumer goods, not capital goods. This is because introducing price signals on goods that are also used as inputs is going to screw the fuck out of the differentiability of the cost function, which is no good if you want to use an algorithm like the Harmony algorithm that relies on the gradient. So, as much as you might want to, you don't want capital goods to be assigned a price.
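To make the gradient use concrete, here is a toy descent step of the kind the Harmony algorithm might take; the names (harmonyStep, targets, learningRate) are mine, carried over from the snippet above, not Cockshott's actual implementation:

// Toy gradient step on the plan's output targets, using the djdc
// vector from the snippet above. djdc[i] is negative, so subtracting
// it raises the target, and the raise is largest for goods with high
// endpoint prices and low SNLT, i.e. in-demand goods that are cheap
// to make.
static void harmonyStep(float[] targets, float[] djdc, float learningRate) {
    for (int i = 0; i < targets.length; i++) {
        targets[i] -= learningRate * djdc[i];
        if (targets[i] < 0) targets[i] = 0; // output can't go negative
    }
}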

This is an issue for innovation, because R&D requires capital goods, so you will need another way of doing R&D. Plus, if you are not going to have a tax, the R&D dollars need to come from somewhere, or you get a varying price-to-value ratio, which also fucks with everything.

So here is the solution: have the R&D department, as well as the social services department, run as a worker co-op. Add the R&D department to the I/O matrix as an industry whose output is R&D units and whose inputs are whatever it needs, then add an optimization constraint that x amount of R&D units must be produced. Then have the now-unbound inputs sold, for direct labour tokens, to whoever requests them, and allow the buyers to use them to produce goods that the central planning system doesn't produce. These get sold at a net profit until a certain amount of surplus has accrued to the co-op's owners, after which the good, if physical, gets added to the central planning system.
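Here is a minimal sketch of that matrix surgery, under assumed names of my own (addRnD, rndInputs, rndQuota): grow the square I/O matrix by one industry whose column is the R&D input recipe, leave its row zero since no planned industry consumes R&D units, and record the output quota as a constraint.

// A[i][j] = units of commodity i consumed per unit of industry j's
// output. minOutput must already have length n + 1; its last slot
// receives the R&D quota constraint.
static double[][] addRnD(double[][] A, double[] rndInputs,
                         double rndQuota, double[] minOutput) {
    int n = A.length;
    double[][] B = new double[n + 1][n + 1];
    for (int i = 0; i < n; i++) {
        System.arraycopy(A[i], 0, B[i], 0, n);
        B[i][n] = rndInputs[i]; // the R&D column: its input recipe
    }
    // Row n stays zero: R&D units aren't consumed inside the plan,
    // they go to the co-ops.
    minOutput[n] = rndQuota; // constraint: at least this much R&D
    return B;
}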

And for services that are not fit for central planning, let them also operate as worker-owned co-ops.

I think this achieves essentially all we could want, and avoids the issues of complete central planning for R&D as well as services.

Of course, the cost function wouldn't be so simple; we'd also want to account for things like resources, and so on, but you catch the drift.

Another nice way in which technology fixes the failures of the USSR: telecommunications and cryptography make it possible to align the incentives of an industry consuming commodity x with those of the industry producing it. If the producer were to underproduce, or the consumer to overconsume, it would be immediately evident to the other industries in their I/O chain, either because the plan isn't being followed or because commodities signed with a given crypto key go missing, and the incentive of those producers/consumers would be to call them out, because it means less work for them :)
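As a concrete illustration of what "commodities signed with a given crypto key" could look like, here is a minimal sketch using plain Ed25519 signatures from java.security (Java 15+); the record format and field names are made up for illustration:

import java.nio.charset.StandardCharsets;
import java.security.*;

public class SignedTransfer {
    public static void main(String[] args) throws Exception {
        // the producing industry's keypair (Ed25519 needs Java 15+)
        KeyPair producerKey = KeyPairGenerator.getInstance("Ed25519").generateKeyPair();
        byte[] record = "industry=steel;commodity=girders;qty=500;period=3"
                .getBytes(StandardCharsets.UTF_8);

        // the producer signs the shipment record on release
        Signature signer = Signature.getInstance("Ed25519");
        signer.initSign(producerKey.getPrivate());
        signer.update(record);
        byte[] sig = signer.sign();

        // the consuming industry (or the planner) verifies it on receipt;
        // a missing or mismatched record is immediately visible
        Signature verifier = Signature.getInstance("Ed25519");
        verifier.initVerify(producerKey.getPublic());
        verifier.update(record);
        System.out.println("transfer authentic: " + verifier.verify(sig));
    }
}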

Also, I like to say that Hayek and Mises get obsoleted at the pace of e^n/n^3 :)
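A throwaway bit of arithmetic behind the quip, purely illustrative: if available compute grows like e^n while the planning problem (big sparse matrix solves) grows like n^3, the headroom ratio explodes.

// e^n (compute growth) vs. n^3 (planning cost): the ratio explodes.
for (int n = 5; n <= 30; n += 5) {
    System.out.printf("n=%2d  e^n/n^3 = %.3g%n", n, Math.exp(n) / Math.pow(n, 3));
}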

Something I forgot to mention: Cockshott forgets that commodities are spatially differentiated. That is, a commodity produced in Volgograd and a commodity produced in Nairobi are not equivalent: one must be transported. Therefore, it would make sense to split each planning region into a few sub-regions where the transport costs are not too divergent, and then create IMPORT and EXPORT industries which offload commodities in exchange for some ratio of commodities from another region; the Harmony algorithm or some other metaheuristic can then optimize this post facto and update the average costs with transportation included.
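A sketch of how that landed-cost bookkeeping could go, with made-up names (landedCost, updateAverage); the real thing would live inside the I/O coefficients of the IMPORT/EXPORT industries:

class RegionalTrade {
    // Labour cost of a commodity landed in region B: its cost as
    // produced in region A, plus transport SNLT per unit on the route.
    static double landedCost(double costInA, double transportSnltPerUnit) {
        return costInA + transportSnltPerUnit;
    }

    // Update region B's average cost for the commodity after an
    // import flow, as a quantity-weighted average.
    static double updateAverage(double oldAvg, double oldQty,
                                double importedQty, double landed) {
        return (oldAvg * oldQty + landed * importedQty) / (oldQty + importedQty);
    }
}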

  • the_river_cass [she/her] · 4 years ago

    my impression is that it's much easier to build a planning system that tries to preserve certain invariants via semi-autonomous agents that are frequently updated by a central authority to tune their behavior rapidly and automatically for present conditions. this preserves local reasoning about the system (relatively simple, autonomous agents) while allowing for whole system optimization. this is the model followed in most of the tech industry right now for everything from resource allocation to the actual delivery infrastructure of the internet. I'll see if I can dig up some papers - it's tough as they're mostly focused on the operations engineering challenges rather than the actual optimization problem at its heart.

      • the_river_cass [she/her] · 4 years ago

        I'm actually thinking about this the way large scale computer infrastructure is organized. so the agents would have autonomy within their grants (you can't exceed your resource limits, you can't access resources you haven't been granted permission to, but you can spike your resource usage above your projections, if needed and below your limits) and the central authority can focus on planning resource allocations such that none of the agents are starved of resources. much of this happens automatically in large scale systems, especially when you're talking about Google, Amazon, and the like, where even hard decisions that would normally seem to require human intervention are routinely conducted automatically then reviewed by people after the fact. it's a model that has scaled to an industry that, at least in dollar values, equals the GDP of many smaller nations, so I suspect the architecture has validity. the other side of the coin is how you manage change in a system of that complexity and size and that's a topic that can and does span entire books.
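        a sketch of those grant rules, with illustrative names of my own (GrantedAgent, tryUse), not any particular scheduler's API:

        import java.util.Map;

        // hard limits and projections come from the central authority's
        // last planning pass; the agent is autonomous within them.
        class GrantedAgent {
            private final Map<String, Double> limits;      // hard caps
            private final Map<String, Double> projections; // expected usage

            GrantedAgent(Map<String, Double> limits, Map<String, Double> projections) {
                this.limits = limits;
                this.projections = projections;
            }

            boolean tryUse(String resource, double amount) {
                Double limit = limits.get(resource);
                if (limit == null || amount > limit) return false; // no grant, or over the cap
                if (amount > projections.getOrDefault(resource, 0.0)) {
                    // spiking above projection is allowed; just flag it
                    // for the central authority's next tuning pass
                    System.out.println("spike: " + resource + " = " + amount);
                }
                return true;
            }
        }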

        the primary advantage of this model is that you can focus resource planning on smaller and smaller regions and aggregate those smaller regions into larger ones via quality-of-service guarantees. this composability is critical to maintaining reasonability over the system as a whole.
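        one way to picture the composition, with a toy Qos record of my own invention (Java 16+): the parent region's capacity is the sum of its children's, but its availability promise is bounded by the weakest child:

        import java.util.List;

        // toy quality-of-service guarantee for a region
        record Qos(double capacity, double availability) {
            static Qos aggregate(List<Qos> children) {
                double cap = children.stream().mapToDouble(Qos::capacity).sum();
                double avail = children.stream().mapToDouble(Qos::availability)
                                       .min().orElse(0.0); // weakest child bounds the promise
                return new Qos(cap, avail);
            }
        }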