Frequently Asked Questions

We are presently writing a FAQ about the research project Information in the Dark. For now, here is a partial first draft of it.

Why consider a model as anything other than an approximation of something?

Because an alternative definition might let us do all the things we can do with this one, plus some others. And because the assumptions it implies might be weaker.

All those definitions define a model relative to "something" that the model is not. But they don't all make the "something" matter in the same way.

The notion of approximation of Definition naturally suggests a notion of 'better model'. In itself it suggests comparing models. If we want to compare two models $M_1$ and $M_2$ that are approximations of $S_1$ and $S_2$ respectively, then we have to: (i) consider how $M_1$ relates to $S_1$, (ii) consider how $M_2$ relates to $S_2$, and (iii) compare the $M_1$/$S_1$ approximation relation to the $M_2$/$S_2$ approximation relation. All this gives a lot of importance to the "somethings" $S_1$ and $S_2$ and to their relations with the models. The problem is that unless both the "somethings" and the relations are formalised objects, there is no way to be sure of what we are really talking about as soon as we start comparing models. And if not for comparing models, it is not clear how the notion of approximation in Definition may actually be of service.

On the other hand, there is no naturally self-imposing notion of 'better perspective'. So in Definition there is no embedded notion suggesting how models should be improved and compared. Consequently, every time we want to compare models, we have to choose and make explicit the ad hoc relation between models that we are going to use. Further, with this definition, an incentive for comparing models only naturally exists when the models are perspectives on the same "something". But in this case, the "somethings" matter only in that they determine which models are worth comparing and which are not. They don't matter in themselves. Generally, as long as we are not comparing nor changing models, as long as we are just abiding by them and studying them, we need not define, let alone mention, the "somethings" nor anything other than the models. And to stop abiding by a model, what we need is a motivation that dictates a particular change of model. Whatever this motivation is, in the end, scientifically, we don't use the assumption that it existed before we got motivated by it, nor that it relates to the motivation behind a different change of representation made somewhere, at some point, by other scientists. And it is worth choosing a definition that implies fewer (or weaker) assumptions.

Is project Information in the dark trying to get at what parameters influence the way we think about a model?

No. The project is based on the following stance: a model is a way of thinking (about a particular object, of interest to science). Or at least, it is the representation of a way of thinking, and as such, it instigates that particular way of thinking.

Is the project trying to get at what parameters influence a model?

Identifying the parameters that influence the model $\equiv$ building or improving the model $\equiv$ identifying the parameters of the model $\equiv$ defining the model. This task is one that requires staying faithful to the perspective that the model represents. So the answer is no.

What is the project trying to get at?

Where there is space for influence and what we can do with it. Equivalently: what a model does not exclude in itself.

Among the parameters determining what perspective we take on something, there might be the state of our body, the colour of the room we're in, the history of our university, a decision made a long time ago by a powerful emperor … Why focus only on some parameters such as the level of abstraction and the time setting?

In addition to the possible endlessness of the list, there is this: what parameters we'll think of first might strongly depend on the (scientific) context. Plus, some parameters might represent a wide variety of different things depending on the context. The level of abstraction is an example: there might be many different relevant ways of defining it. As a general, nondescript parameter, there is no more reason to focus on this particular parameter than on any other. Abstraction, like time flow, is of interest to the project only as a formal(isable) relation between instances of interaction systems. The reason to focus on it is that an understanding of such relations provides us with tools to compare models, call them into question and consider alternative versions of them that might suggest researching information in different places or in different ways.

More on this …

So the project is not looking for the reasons behind the perspectives we take but for their possible outcomes?

Yes, their possible outcomes that are expressible in the terms of the model at hand. That is to say, more precisely: the extent to which models can differ for other reasons than the properties of their underlying formalism and the properties of the objects they are supposed to represent. That is the idea of the grounding, fundamental-research part of the project. The rest of the project relies on this research, but it is rather about generating changes of perspectives in practice.

Why not go for the explanations? Why just go for the effects?

(1) First of all, my deep conviction is that science is not about explaining things; it's about formalising grounds on which humans can meet and build together. And indeed, the "black box approach" is not specific to this project. I believe the whole of science works that way: it defines objects/concepts/phenomena and uses them. Scientific definitions are not made to capture the ℰssence of things once and for all so that humans can all experience the world in the same correct manner. Scientific definitions are made to capture effects that we experience or witness, in a form that can be shared and transferred from one human to another for the sake of making those effects useful to us.

(2) Perhaps it makes sense to focus on explaining perspectives if we want to avoid having them altogether. But if we want to use them, then it makes greater sense to start investigating what impact they have.

(3) Completing the knowledge we already have in a certain scientific context with more knowledge relative to the perspective we take on this knowledge is something we can do. But it is not clear how this can help make a perspective obsolete and switch to a new one. Ultimately, the project aims at generating controlled changes of perspectives in favour of the perspectives that can presently be more useful to us. The most direct route to that end involves understanding how a perspective on something interferes with the information we draw out of it.

(4) Explaining a perspective on an object is likely to require manipulating knowledge and applying competences that are specific to the domain from which the object is defined and considered, i.e. working in the light of the present state of the art. As a computer scientist I am not equipped with the competences of a specialist to carry out this task; I don't have access to the level of detail it requires. But as a computer scientist, there is something else I can do:

(5) Taming the effects entails carrying out fundamental computer science investigations of formal objects. And investigating formal objects can be much easier and more reliable than investigating something lacking a fully explicit definition. What is more, it is always possible.

Are you interested in the process of subjectivisation?

No.

Your research is very theoretical. Do you already have an idea of how to start extending it to other scientific domains?

Yes. That's what the "transdisciplinary platform" is for.

The transdisciplinary platform is a tool the project will develop to experiment with a "system".

The experimenting will be done via (1) the inverse Turing test proposed as part of an online application, and (2) series of transdisciplinary workshops addressed to ad hoc work-groups of researchers.

The system is a set of practical techniques (questions to ask, exercises to propose, sketches to inspire) designed to circumscribe a researcher's train of thought, and parse it into an explicit, coherent formal system of statements and relations that exhaustively expresses the logic of the train of thought so that it can be manipulated without the specifics having to be dealt with.

Thus the transdisciplinary platform is meant to materialise an interface allowing theoretical and experimental results to be carefully applied to one another. It will be used to collect feedback, discuss hypotheses and determine the extent to which specific ideas are dependent on their original domain-specific formulation, and the extent to which the exploitation of these ideas is characteristic of, and limited to, the scientific context they emerge from.

Say you have model $M_1$ representing a certain perspective on an object $O$, and model $M_2$ representing a different perspective on the same object. Do you want to build a model $M_3$ representing a perspective from which both $M_1$ and $M_2$ make sense?

No. I don't even know if such a perspective exists. Changing to a new perspective is enough.

Are you proposing to build a universal model?

No.

Are you proposing to build a generalised model $M_3$ that accounts for what both the models $M_1$ and $M_2$ account for, and in addition to that accounts for the difference between $M_1$ and $M_2$?

No. Generalising models means prolonging the life span of a perspective we are taking on something and of the knowledge we have of it. What the project proposes is to update knowledge by obsoleting the models underlying it.

Are you proposing to build better models?

No. The theoretical research on dark information can suggest new models and ways to take greater advantage of existing models. But the project makes a point of not being "judgemental" about models, and of not making or inspiring comparisons between models other than comparisons based on explicitly defined criteria.

The value of a model is also in what it excludes. If we start completing models with the specification of what makes them limited to a certain perspective, then aren't we going to lose the benefit of that?

Completing models with the specification of what makes them limited to a certain perspective is not what the project proposes to do, especially since the project essentially relies on the idea that a model is a specification of what delimits a perspective. Generally, the project is not about elaborating a new nature of models, but rather about deriving a method and some tools to manage efficiently, rigorously, flexibly and meaningfully the kind of models we already have. It does, however, aim at adding a "level of exclusion". As demonstrated in [PN], formal systems that are significantly different can compatibly represent the same object. This should make any model-user wonder: Can anything be represented by anything? And what does this question even mean? Theoretical computer science research on dark information is expected to clarify the issue. Given a model and an explicit statement of the purpose it is meant to serve (e.g. explaining a certain phenomenon), the research will circumscribe the range of formal systems that have the capacity to act as alternative models serving the same purpose. It will characterise the properties that the formal systems need to have and exclude those that are incompatible.

Can we deliberately make a model obsolete without deliberately trying to refute it?

Let's agree on the following definition: 'obsolete', in the context of science, means outmoded by something else. A model is obsolete because it is outperformed by another model or because it is eclipsed by a new paradigm.

On those grounds, the answer is yes [CN].

Further, trying to refute a model can get in the way of making a model work for its own obsolescence. Refuting a model generally means abandoning it for a different reason than the one that made us build it in the first place. On the contrary, to make a model work for its own obsolescence requires digging out from the model, and valuing, all the sense the model conveys in order to tame it, exhaust it and surpass it.

What options do we have assuming we don't want to stay stuck on the same model forever?

(1) We can look for data that refutes the conclusions we draw out of the model and/or the information we originally put in it.

(2) We can invent a whole new model and arbitrarily decide to abandon the current one.

(3) We can tweak the present model to get an improved version of it provided we already are equipped with means to compare models and tell what is an improvement.

(4) We can identify what possibilities are not excluded by the model and investigate their relevance.

The downside of technique (1) is that if the model is already up to date with the data, then the model in itself won't be of any help: it can't point towards new data that will make us move on. Techniques (2) and (3) don't use the model at all: no motivation nor guideline for the change of model is provided by the model itself; the motivation comes from outside of the model. The project proposes to domesticate technique (4).

Why doesn't tweaking a model boil down to tweaking a formal system? Isn't a model a formal system?

We are free to define the terms "model" and "formal system" as we please. However there is a benefit in defining them in a way that distinguishes the two terms.

In either of these senses, a model that models (that is, represents or informs on) something else is not a formal system, or at least not just a formal system.

Take a plain square drawn on a page. In itself it doesn't explain anything, nor does it represent or inform on anything that it isn't. It just is, and we call it a square. Similarly, this:

$\frac{P\longrightarrow Q ~~~~~~P}{Q}$

doesn't say anything about anything. It's just a series of symbols that we call "Modus Ponens". It starts making sense when we start making use of it. Its sense is in how it works. The point is that there are a lot of things that a formal system (its defining objects, symbols, rules, properties, statements) doesn't say. And when we use a formal system as a model we risk making it say much more than it actually can, and thereby serve us much less than it possibly can. In short, not confusing a model with the underlying formal system is like not confusing the content of a black box with the black box itself, and thereby not confusing a limit on our knowledge and understanding of the contents of a black box with a limit on the use we can make of it.
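To make the last point more tangible, here is a minimal sketch in Python (purely illustrative, and in no way part of the project's formal apparatus) of Modus Ponens as a mechanical rule operating on bare symbols. The encoding and the symbol names are arbitrary choices made for the sketch; the point is only that the rule manipulates symbols without "knowing" what they stand for, and only starts making sense once we make use of it.

```python
# A purely illustrative sketch: Modus Ponens as a mechanical, syntactic rule.
# Formulas are just nested tuples of symbols; the rule knows nothing about
# what 'P' or 'Q' stand for.

def modus_ponens(implication, antecedent):
    """From ('->', P, Q) and P, derive Q; return None if the rule doesn't apply."""
    if (isinstance(implication, tuple) and len(implication) == 3
            and implication[0] == '->' and implication[1] == antecedent):
        return implication[2]
    return None

# The rule only starts "saying" something once we put it to use:
print(modus_ponens(('->', 'it_rains', 'the_ground_is_wet'), 'it_rains'))
# prints: the_ground_is_wet
```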

An example of the way the difference between 'model' and 'formal system' matters.

The answer to the first part of the question follows directly from the discussion above. If we tweak the formal system we have to find out how this impacts the model, and we have to make sure that we indeed still have a model that models, and, what is more, a model that models what we want it to model and how we think it does.

An example of how tweaking a formal system might undermine the modelling.

Using one object as a representation of another usually entails relying on a (large) number of (very significant) assumptions about how this is possible. If we need to assume something, then either we don't know that this something is true, or we do know that it is true and we still need to graft it onto the model as an assumption, an assumption that requires tweaking the underlying formal system. In the latter case, the formal system we settled on is not expressive enough on its own to formalise every true fact that matters to this modelling; in other terms, we made a poor choice of formal system from the start with respect to the modelling we had in mind. In any case, tweaking a model is a delicate task because a model does not boil down to a set of representations of things we know. The things we don't know are just as important to the definition and usage of a model. This is the huge, fundamental difference between models and formal systems.

Is the aim of the project to shift attention towards making models obsolete rather than more realistic, more accurate or more complete?

Yes, but not just that. The aim of the project is to develop a systematic and generic method to make models obsolete. The point of a systematic method is to ensure that the models we use, and the information we draw out of them, are relevant, reliable and up to date, that we have a certain amount of control over them and, importantly, that we make sound use of them. The benefit of a generic method is that it is necessarily based on investigations that can be carried out whatever the semantic specifics of the model, i.e. investigations of fundamental and necessarily well-formalised objects. It entails identifying precisely which of the objects involved in a specific modelling can be well formalised, and which cannot. In this sense, having a systematic and generic method to make our own models obsolete, even the most satisfactory ones, favours the reliability of the information we are drawing out of them. And with the flexibility it gives us regarding our own models comes an assurance of progress one way or the other.

How is the project's scientific approach significantly different from traditional scientific approaches?

Traditionally, we use our present scientific answers as guides for our thinking and let them determine what scientific questions we will address next. The project's approach ensures that we have first considered all of the questions we already have answers to. Some of those questions might call for alternative answers in between our current models. With this approach, we get to explore and make explicit what we don't know, as opposed to ignoring it so that we can refine, confirm or prolong what we do know.

A large part of science consists in exploiting one system's fortunate ability to model a different system. This 'fortunate ability' is often a sufficient motivation for us to try and build on a model. And usually, when we no longer can do that, we simply abandon the model. Schematically, what the project proposes to do before we start doing any of this is to investigate this 'fortunate ability'.

How can the model on its own tell us how to make itself obsolete?

Let us consider the following definitions:

None of those definitions of "model" has an ingrained notion of 'better model' that we can work with in order to move from current models towards new ones. Nonetheless, even in the dark, there are things we can do:

Definition implies that a model doesn't say everything about the "something" it is meant to be about. We are necessarily blind to whatever we don't see from the perspective the model represents. However (cf Definition), there is no reason we should also be blind to whatever the model's underlying formal system doesn't say. Research on dark information consists in investigating ways to exploit this. More precisely, the project aims at characterising the extent and the nature of the differences there can be between different although compatible representations of the same "something". [PN] and [CN] give examples. In short, the idea is to "exhaust" the information conveyed stricto sensu by the model so as to see more clearly through it the information that it doesn't convey, and to make sure we don't get misled or bogged down by information that it merely looks like it could be conveying. Further, Definition emphasises the fact that causality is not logical implication. [CN] discusses and illustrates how this fact can be exploited too. The baseline idea is that with Definition, a natural way of obsoleting a model consists in obsoleting the causality it conveys, i.e. replacing it by a different causality that explains the same effects with different, finer or more adequate "causing mechanisms".

"(How) Are perspectives involved in the natural sciences?": Isn't this an epistemological question rather than a scientific one?

Instinctively, I would say that it depends on what exactly we mean by 'perspective', and whether (i) we are asking the question in order to find, test, or confirm a definition with which to coin and explore the concept of perspective, or whether (ii) we have already settled on a possibly arbitrary definition and want to make it thenceforth serve the exploration of different concepts. But I'm not in a position to speak for what epistemology may regard as questions of its own. As far as science is concerned, the question is of interest if it is asked like this: Is there something we might agree to call "perspectives" that might have some influence on the output of natural scientists? Or to narrow things down: Does the information that natural scientists draw out of their formal models depend on what we might agree to call (their) "perspectives", in other terms, on something that isn't formalised in the models? Equivalently: Can two different models be simultaneously consistent with one another and yield contradictory information about something? In other terms, given two models $M_1$ and $M_2$ and a thing $T$: Can it be that nothing in $M_1$'s definition excludes $M_2$ as an alternative formal representation of $T$, but still, the properties $T$ has according to $M_1$ straight-out contradict the properties $T$ has according to $M_2$? A simple example [PN] using Boolean Automata Networks is enough to show that the answer is yes. Notably, neither $M_1$ nor $M_2$ is $T$. $M_1$ and $M_2$ are formal systems. Their properties are not the properties of $T$. What is more, they are two different formal systems, so in theory they have no reason to have the same properties: neither of them formalises a reason to look a bit like the other. Still, we expect $M_1$ and $M_2$ to look at least a bit like one another since they are both supposed to formalise the same thing. Now, there is a lot of magic to it, but indeed: the formal tends to inform on the formalised. So if the formalised, namely $T$, makes any sense at all, there must be a reason for $M_1$ and $M_2$ to look at least a bit like one another. Indeed, to every formalisable effect, there is a formalisable reason. To formalise effects that are involved in the objects of current scientific investigations, and to formalise reasons for them, is undoubtedly a matter of scientific research.
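As an aside, here is a minimal sketch in Python of the kind of situation at stake. It is illustrative only and is not the example from [PN]; in particular, taking "agreeing on the fixed points" as what makes neither network exclude the other as a representation of $T$ is an assumption made purely for the sake of the sketch. Two Boolean automata networks over the same two components have exactly the same fixed points, yet they contradict each other about whether the state $(0,1)$ ever stabilises.

```python
# Illustrative sketch only (not the example from [PN]): two Boolean automata
# networks over the same two components agree on their fixed points but
# disagree on the fate of other states.
from itertools import product

# M1: each automaton copies the other one.
M1 = [lambda x, y: y, lambda x, y: x]
# M2: each automaton computes the conjunction of the two.
M2 = [lambda x, y: x and y, lambda x, y: x and y]

def step(net, state):
    """One synchronous (parallel) update of the whole network."""
    x, y = state
    return (int(net[0](x, y)), int(net[1](x, y)))

def fixed_points(net):
    return {s for s in product((0, 1), repeat=2) if step(net, s) == s}

def stabilises(net, state, horizon=8):
    """Does `state` reach a fixed point within `horizon` parallel steps?"""
    for _ in range(horizon):
        nxt = step(net, state)
        if nxt == state:
            return True
        state = nxt
    return False

print(fixed_points(M1) == fixed_points(M2))  # True: both are {(0,0), (1,1)}
print(stabilises(M1, (0, 1)))  # False: (0,1) and (1,0) swap forever
print(stabilises(M2, (0, 1)))  # True: (0,1) falls into (0,0)
```

Of course, whether agreeing on fixed points is the right notion of compatibility is itself a modelling choice; the sketch only shows that compatibility in one respect does not prevent contradiction in another.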

How does (i) the notion of perspective relate to (ii) the formalism vs semantics opposition (symbol/form vs meaning/sense) and to (iii) a discipline like Fundamental Computer Science?

The relation is in the way (iii) can put (ii) to use to change (i).

Schematically:

perspective $\equiv$ model $\equiv$ formal system $+$ semantics.

To favour a change of perspective, we can challenge the model from the inside by investigating how the symbols it is built with are assigned meaning. The symbols act as black boxes. They are meaningless in themselves. But we human scientists fill them up with things that are not symbols, of which the symbols are the only trace. Computer science cannot claim to have any grip on the contents of the black boxes. However, it has a fine understanding of the mechanics of systems of black boxes. So it can do things like check that there are enough black boxes in the system, and check that the mechanics of the system are sound. The case where there aren't enough black boxes corresponds to the case where we are making a model say more than it can. In this case we have to decide whether we want to augment the model or whether we prefer to stand by its original design, which means standing by what it says as much as by what it doesn't. The first choice leads to the mandatory review/redefinition of the augmented system's mechanics; the requirement of soundness might entail rethinking the whole design of the model. The second choice leads to rejecting a piece of meaning, i.e. clearing some space and possibly, if that piece of meaning was actually being made effective use of in the modelling, exploring what alternative pieces of meaning serving the same purpose this one could have been eclipsing. In any case, assuming models shed light on the objects of scientific interest, or that they represent the way our scientific understanding so far sheds light on them, then by paying scientific attention to this information process we may get to shed light in a different way.

Are you proposing to formalise, describe, or codify human umwelt?

No, that would be looking upstream, while Project Information in the dark is looking downstream, advocating for a very attentive administration of the world of formalisable things so that it doesn't infringe on, nor eclipse the benefit of whatever doesn't belong to it.

How can your research be summed up in 3 sentences?

My research consists in developing a generic method to find information where we don't usually look for it, on the grounds of what we don't know. For now, it focuses on information concerning anything that might relevantly be considered as an "interaction system" so that it can be represented in the formalism of Boolean Automata Networks. It exploits the difference there is between a model and its underlying formal system, to uncover information about interaction systems (properties of the systems, explanations of their behaviours) that is not dependent on the specific perspective represented by the model.
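For reference, and independently of this project's specific results, a Boolean automata network of size $n$ is standardly defined as a function

$f = (f_1, \ldots, f_n) : \{0,1\}^n \longrightarrow \{0,1\}^n$

where each local function $f_i$ gives the next state of automaton $i$ in terms of the current states of all the automata, and where the choice of an update schedule (parallel, sequential, asynchronous …) turns $f$ into a dynamics on the global state space $\{0,1\}^n$.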

What is an "inverse Turing test"?

The classical Turing test is a conceptual test designed to tell the machine from the human. An inverse Turing test is a practical test meant to tell the human from the machine in order to extract information from the human that no machine yet has.

Does an inverse Turing test work for any human and any kind of information?

No. Each inverse Turing test is designed for a specific practical purpose in a well-defined framework. It will only extract information relative to that framework from a human who is actually working in that framework.

How does an inverse Turing test work?

Exploring the parts of a model where there is space for interpretation, it pinpoints the assumptions that model-users need to be making if their conclusions about the model are to be fully justified. Those assumptions can then be formalised and soundly grafted to the model if they weren't already, or called into question.

Is the inverse Turing test meant to rule out the ambiguity in models?

No. To use it. This might indeed mean transforming some ambiguity into explicit information soundly integrated with the rest of the model. But it might also mean inspiring a flat change of model or paradigm. Or it might simply mean emphasising the ambiguity so that the familiar ways we have of dealing with it don't make us blind to it. In any case, the inverse Turing test puts some words to the ambiguity, with the aim of deriving a better understanding of where it comes from, so that it doesn't stop us from communicating.

When the information collected by an inverse Turing test doesn't immediately serve the user by suggesting to them new questions or new models, what will be done with it?

The information we are talking about is an intelligible record of the choices that model-users implicitly make about their particular models in order to make them work. Although those choices are fundamental aspects of models, they generally are not formally supported. This shortcoming will be addressed by the inverse Turing test results. Once multiple model-users have responded to the same test about the same model, those results will be combined and presented as statistical data for model-users to support and/or contextualise their choice of interpretation of the formalism. Model-users with a mainstream perspective will be given the explicit argument of majority thinking. Model-users with non-mainstream perspectives will gain the possibility, if not to prove that they are not isolated, then at least to prove that the subject is open for discussion. In all cases, the inverse Turing test will emphasise the idea that perspectives are not proven, at least not in the same way properties of formal systems are. It will help keep in mind the difference between the different sorts of proof involved in our scientific conclusions, and it will help maintain a space for discussion where it is still possible, if not necessary.

Who will design the inverse Turing tests?

In theory, anyone can design their own inverse Turing test. And the project aims at facilitating this by devising practical guidelines for the conception of inverse Turing tests. Eventually it will implement those guidelines in a user-friendly online application that assists the conception of tests and hosts them so that their authors can refer their peers to them and access their answers. To begin with, however, the project will concentrate on developing the tests devised by its main participants, and on experimenting with those.

Aren't inverse Turing tests going to limit the capacity of researchers by making them think like computers?

The computer-like/rational/formalisable/univocal part of the human way of thinking is what scientific and academic tradition explicitly emphasises and openly approves of. The rest, the essentially human part, it prefers to disregard. The inverse Turing test proposes to distinguish the two properly and explicitly in order to value them both. So no.

Is the aim of the transdisciplinary workshops to develop a sort of "universal scientific language"?

No. A universal method. The language will depend on the work-groups. The method is what will help define it and make it adapt to the interests we come to explore as a group.

I don't think that a universal scientific language would be any more useful than a universal inverse Turing test. We have no particular reason to want all model-users to answer an inverse Turing test in the same way, just as we have no particular reason to want a universal model. Similarly, we have no particular reason to want all workshop and work-group members to use the same vocabulary with the same univocal meaning. But locally and on specific occasions, we want to use the fact that they do or the fact that they don't.

Can an inverse Turing test really recover the extreme complexity there is to what the scientist, the historian, the political scientist … knows, by breaking it down and codifying it into simple bits that a machine can understand?

Complexity is not necessary, especially not extreme complexity. Making explicit just one thing that wasn't explicit until now is enough. The aim is to avoid getting stuck on a model for the wrong reasons when we can help it. Taking into account something different is enough to unlock a situation in which a model that is familiar to us is representing and confirming what we already see and think much more than it is leading us to what we could.

Example

Generally, the target of the test is not the pieces of knowledge themselves; it is the logic that binds them together to produce new pieces of knowledge. Here, "expressing a piece of information in a way a machine would understand" merely means including that piece of information in a formal system that works, i.e. ensuring that at least this part of the scientist's assignment is possible even if there is otherwise extreme complexity to be dealt with. An inverse Turing test is not an attempt at codifying what specialists know and understand about their subject so that a computer can mimic them. It is based much less on the things the specialists know than on the things they don't, and on how they deal with them.

What makes you think anything relevant can be said about a particular research subject if its specifics are bypassed?

However expert you are in your domain, at the end of the day, you are still using a human brain to draw your conclusions, the same thing I'm using, the same thing every researcher is using. The conclusions you publish in academic journals necessarily come out of the application of some kind of logic. You must be using some sort of portable form of thought that can be transferred from one human mind to another, something that I can understand even unprepared, although you might have to explain the details to me. It might take us a huge amount of time, but at long last, you will succeed in explaining, and I in understanding, what sense you put behind a "therefore". The key is in the "bypassing" part, in how it is done.

How, as a non-specialist, can you produce adequate questions for specialists and then exploit their answers?

Inadequate questions are not problematic in themselves. On the contrary, by asking them, we can get precious help in understanding what makes them inadequate. And this will contribute to bridging a gap that we have otherwise absolutely no rigorous scientific tools to help us deal with, namely the epistemological gap between academically circumscribed parts of science.

The data collected by the project is generally not meant to build new scientific theories. It isn't interesting in itself but in what it is going to allow us to do together. As far as this project is concerned, non-specialist questions addressed to specialists therefore need not be interpreted and rephrased in terms that make sense to the specialists; nor, reciprocally, do the specialists' answers need to be translated for the non-specialist to understand. For different kinds of specialists to manage to communicate with each other in a significant and constructive manner, there are many things that need to be understood and shared before the specifics of each of their domains can be. Project Information in the dark is precisely dedicated to exploring, landmarking and organising those things.

References:

[CN] Causality and Networks, M.N., 2016

[PN] Perspectives and Networks, M.N., 2016