An Exploration of A Model
A Refinement of the Idea
The germ of the idea was computational policy: policies, regulations, social norms, rules, and principles that are by default expressed and encoded in natural language. These filaments of language form the bounding shapes of how our society functions, defining what is possible and what is not, and giving us a means to reshape both. In some cases these rules are as hard and fast as the rules of physics, chemistry, or biology in how they affect our everyday lives. The literature documents several highly successful (read: Nobel Prize winning) applications of teaching neural networks about the physical sciences. These neural networks are trained to control the configurations of physical systems and then simulations are run; in some cases the neural network is the simulation. TODO - Look at how AlphaFold is actually implemented.
Technology Adoption
I think one of the best ways to align tools to the ethics of their users is to make the tool flexible and able to adapt itself to the user. This stands in stark contrast to the user adapting herself to the tool. For a tool to be adaptable, the user needs to actively adapt it, and to do that she needs to understand, to some degree, how the tool works. This applies to every technology that I can think of, or have thought of. If you have an example of a technology that adapts to the desires of the user, show it to me, and I will either show you an exploitative marketing device (much like the bulk of social media and Web 2.0 tech) or you will have found a truly magical piece of tech that we should model our efforts on here. No, I think for a tool to be adapted, we have to have some measure of knowledge of how it works.
This was one of the ideas at the heart of the hacker ethos described in:
Williams, Sam, and Richard Stallman. Free as in Freedom 2.0: Richard Stallman and the Free Software Revolution. Free Software Foundation, 2010.
In order to use tools optimally, we need to know how they work. If we don't, we aren't using the tool; it is shaping and using us.
Therefore, one way to improve alignment is to increase adoption. If more people know how these models work, what they can and cannot do, and how to change them to suit individual needs, the collective whole of the models can't help but become more aligned with humanity's goals and ethics.
To date, though, positions on AI adoption have largely been bifurcated into extremely optimistic early adopters and tinkerers, and AI skeptics with extensive lists of the (real and imagined) risks and shortcomings of the models. Some people have trod the middle ground of pragmatic application, cautious attempts to understand scaling, and alignment research to map this landscape. I am going to make a value statement: "We need more people in the middle ground exploring what these models can do and how they behave in many, if not all, aspects of society." If we have people.
TODO - explore individual adoption of their own tools vs adoption of commercial tools
Look at Mesa for models in Python
Look at simple AgentPy models
Look at NetLogo
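To make the kind of model these frameworks express concrete before committing to any of them, here is a minimal sketch in plain Python (not Mesa, AgentPy, or NetLogo code) of a toy agent-based model: a handful of agents random-walking on the integer line. All class and function names here are my own illustrative choices, not any library's API.

```python
import random


class Agent:
    """A minimal agent that takes one random step on a 1-D line per tick."""

    def __init__(self, position=0):
        self.position = position

    def step(self, rng):
        self.position += rng.choice([-1, 1])


class WalkModel:
    """A toy agent-based model: n agents random-walking on the integer line."""

    def __init__(self, n_agents, seed=0):
        self.rng = random.Random(seed)  # seeded so runs are reproducible
        self.agents = [Agent() for _ in range(n_agents)]

    def step(self):
        for agent in self.agents:
            agent.step(self.rng)

    def run(self, ticks):
        for _ in range(ticks):
            self.step()
        return [agent.position for agent in self.agents]


positions = WalkModel(n_agents=5, seed=42).run(ticks=100)
print(positions)
```

The frameworks above mostly add scheduling, grids, and data collection on top of this same agent/model/step skeleton, which is why the skeleton is worth having in hand first.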
This project is an exploration of several types of models. First, a warning: the word "model" is going to get overloaded quickly in this project, and I need a way to distinguish between the Large Language Model (or a generic AI model, for that matter) and the physics and mathematics model that we will be interfacing the LLMs with. I will use the term "Language Model" to mean the AI or LLM generating the policy and simulation conditions, and the term "World Model" to mean the physics, math, social, agent, or complex-system model that we will be trying to learn from.
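The Language Model / World Model split can be sketched as a two-function interface. This is an assumption about the eventual architecture, not an implementation: the stub below stands in for a real LLM call, and the parameter schema (`n_agents`, `ticks`) is a hypothetical example of what a parsed policy might look like.

```python
def language_model_policy(prompt):
    """Stub standing in for a Language Model call.

    A real system would send `prompt` to an LLM and parse its reply;
    here we just return a fixed, hand-written parameter set.
    """
    # Hypothetical schema: the keys are whatever the World Model accepts.
    return {"n_agents": 10, "ticks": 50}


def world_model(n_agents, ticks):
    """Trivial World Model: each agent accumulates one unit per tick."""
    return [ticks] * n_agents


params = language_model_policy("Set up a small baseline simulation.")
results = world_model(**params)
print(results)
```

The point of the sketch is the boundary: the Language Model only produces parameters and conditions, and the World Model only consumes them, so either side can be swapped out independently.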