Systems

Description

Mind map on Systems, created by Cj H on 01/05/2020.

Summary of the resource

Systems
  1. Objects within systems
    1. variables
      1. variables are used when they are under some kind of quantifying constraint, namely either a universal quantification ∀ or an existential quantification ∃. Formulae in which all variables are quantified over are called sentences or closed formulae. Otherwise, the formulae are called open, and are said to contain free variables.
        1. Prop. variables are T or F. Used in propositional formula: a type of syntactic formula which is well formed and has a truth value. If the values of all variables in a propositional formula are given, it determines a unique truth value.
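           A minimal sketch of the claim above (the formula and helper are illustrative, not from the source): once every propositional variable is assigned, the formula evaluates to exactly one truth value.

           from itertools import product

           # hypothetical propositional formula: (p AND q) OR NOT r
           def formula(p, q, r):
               return (p and q) or (not r)

           # each assignment of the variables determines a unique truth value
           for p, q, r in product([True, False], repeat=3):
               print(p, q, r, '->', formula(p, q, r))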
          1. The predicate calculus goes a step further than the propositional calculus to an "analysis of the inner structure of propositions" [3]. It breaks a simple sentence down into two parts: (i) its subject (the object, singular or plural, of discourse) and (ii) a predicate (a verb or verb-clause that asserts a quality or attribute of the object(s)). The predicate calculus then generalizes the "subject|predicate" form (where | symbolizes concatenation (stringing together) of symbols) into a form with the following blank-subject structure "___|predicate", and the predicate in turn is generalized to all things with that property.
            1. propositional variable is intended to represent an atomic proposition (assertion), such as "It is Saturday" = p
            2. Truth value: analytic or synthetic. Empiricists hold that to arrive at the truth-value of a synthetic proposition, meanings (pattern-matching templates) must first be applied to the words, and then these meaning-templates must be matched against whatever it is that is being asserted. 0 or 1 is used in science, assigned externally.
              1. Second order variables: A sort of variables that range over sets of individuals. If S is a variable of this sort and t is a first-order term then the expression t ∈ S (also written S(t), or St to save parentheses) is an atomic formula. Sets of individuals can also be viewed as unary relations on the domain.
                1. For each natural number k there is a sort of variables that ranges over all k-ary relations on the individuals. If R is such a k-ary relation variable and t1,...,tk are first-order terms then the expression R(t1,...,tk) is an atomic formula.
                  1. For each natural number k there is a sort of variables that ranges over all functions taking k elements of the domain and returning a single element of the domain. If f is such a k-ary function variable and t1,...,tk are first-order terms then the expression f(t1,...,tk) is a first-order term.
                  2. Second-order existential quantification lets us say that functions or relations with certain properties exists. Second-order universal quantification lets us say that all subsets of, relations on, or functions from the domain to the domain have a property. In first-order logic, we can only say that the subsets, relations, or functions assigned to one of the non-logical symbols of the language have a property
                    1. In first-order logic we can define the identity relation Id_|M| (i.e., {⟨a, a⟩ : a ∈ |M|}) by the formula x = y. In second-order logic, we can define this relation without =. For if a and b are the same element of |M|, then they are elements of the same subsets of |M| (since sets are determined by their elements). Conversely, if a and b are different, then they are not elements of the same subsets: e.g., a ∈ {a} but b ∉ {a} if a ≠ b. So "being elements of the same subsets of |M|" is a relation that holds of a and b iff a = b. It is a relation that can be expressed in second-order logic, since we can quantify over all subsets of |M|. Hence, the following formula defines Id_|M|: ∀X (X(x) ↔ X(y))
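                       A small sketch (helper code assumed, not from the source) checking this definition on a finite domain: two elements belong to exactly the same subsets iff they are equal.

                       from itertools import chain, combinations

                       domain = [0, 1, 2]
                       # enumerate all subsets of the finite domain
                       subsets = list(chain.from_iterable(
                           combinations(domain, k) for k in range(len(domain) + 1)))

                       def second_order_equal(a, b):
                           # ∀X (X(a) ↔ X(b)), with X ranging over all subsets
                           return all((a in S) == (b in S) for S in subsets)

                       assert all(second_order_equal(a, b) == (a == b)
                                  for a in domain for b in domain)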
                  3. if we use a predicate symbol Rx, to mean “x is red”, then we write: ∀xRx
                    1. A constant in logic is a symbol which refers to a specific object. It is essentially a proper noun in a logical language. Constants are usually used in association with predicate symbols, which add semantic meaning to logical expressions. For instance, we may define the predicate symbol Ex as “x eats”, where x is a variable. Then, we may say Ew to express “Wanda eats”.
                      1. Classes of objects:
                        1. type theory: every "term" has a "type" and operations are restricted to terms of a certain type.
                          1. concepts like "and" and "or" can be encoded as types in the type theory itself.
                            1. The term 2+1 reduces to 3. Since 3 cannot be reduced further, it is called a normal form.
                              1. For a normalizing system, some borrow the word element from set theory and use it to refer to all closed terms that can reduce to the same normal form. A closed term has no parameters. (A term like x+1 with its parameter x is called an open term.)
                                1. Two terms are convertible if they can be reduced to the same term.
                                  1. A dependent type is a type that depends on a term or another type. Thus, the type returned by a function may depend on the argument to the function.
                                    1. Having a type for equality is important because it can be manipulated inside the system. In intuitionistic type theory, the equality type (also called the identity type) is known as I for identity. There is a type I A a b when A is a type and a and b are both terms of type A. A term of type I A a b is interpreted as meaning that a is equal to b.
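                                       A one-line Lean sketch tying together the two ideas above (normal forms and the identity type): 2 + 1 and 3 share the normal form 3, so the reflexivity term inhabits the identity type.

                                       -- 2 + 1 = 3 is the identity type I ℕ (2+1) 3; both sides
                                       -- reduce to the normal form 3, so rfl proves the equality.
                                       example : 2 + 1 = 3 := rfl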
                                      1. Some systems build them out of functions using Church encoding. Other systems have inductive types: a set of base types and a set of type constructors that generate types with well-behaved properties. For example, certain recursive functions called on inductive types are guaranteed to terminate. Coinductive types are infinite data types created by giving a function that generates the next element(s). See Coinduction and Corecursion. Induction-induction is a feature for declaring an inductive type and a family of types which depends on the inductive type. Induction recursion allows a wider range of well-behaved types, allowing the type and recursive functions operating on it to be defined at the same time.
                                        1. type theories have a "universe type", which contains all other types (and not itself).
                          2. Ways to sort within a system:
                            1. Number
                              1. Species
                                1. Relations
                                1. Functions arise from relations between objects
                                  1. Elements p stand in a relation R, so that the behavior of p in R differs from its behavior in another relation, R'. If the behavior in R is the same as in R', there is no interaction.
                                    1. See p. 57 of General System Theory for the mathematical proofs.
                                        1. Growth of system proportional to number of elements present
                                          1. Progress is possible only by passing from a state of undifferentiated wholeness to a differentiation of parts.
                                          1. We term a system 'closed' if no material enters or leaves it; it is called 'open' if there is import and export of material.
                                              1. In any closed system, the final state is unequivocally determined by the initial conditions. This is not so in open systems. Here, the same final state may be reached from different initial conditions and in different ways. This is what is called equifinality.
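                                               A minimal numerical sketch of equifinality (the growth law and all parameters are illustrative, not from the source): the open-system law dx/dt = a − b·x reaches the same steady state a/b from very different initial conditions.

                                               a, b, dt = 2.0, 0.5, 0.01   # illustrative rates

                                               def simulate(x0, steps=5000):
                                                   # forward-Euler integration of dx/dt = a - b*x
                                                   x = x0
                                                   for _ in range(steps):
                                                       x += (a - b * x) * dt
                                                   return x

                                               print(simulate(0.0), simulate(10.0))   # both converge to a/b = 4.0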
                                              2. Biologically, life is not maintenance or restoration of equilibrium but is essentially maintenance of disequilibria, as the doctrine of the organism as open system reveals. Reaching equilibrium means death and consequent decay.
                                                1. Static teleology: certain arrangement is useful for a purpose
                                                1. Dynamic teleology: directiveness of processes
                                                  1. Direction of events toward a final state; present behavior depends on the final state. Discrete systems depend on time.
                                                    1. Direction based on structure: an arrangement of parts leads the processes in the system to a certain result
                                                      1. Feedback mechanisms: reaction to output; a certain amount of the output is fed back as information to the input, to direct and stabilize the process
                                                          1. Human finality: behavior determined by goal
                                              3. Memory: a system with memory has outputs that depend on previous (or future) inputs. Example of a system with memory: y(t) = x(t − π). Example of a system without memory: y(t) = x(t).
                                                1. Invertibility: an invertible system is one in which there is a one-to-one correlation between inputs and outputs. Example of an invertible system: y(t) = x(t). Example of a non-invertible system: y(t) = |x(t)|. In the second example, both x(t) = −3 and x(t) = 3 yield the same result.
                                                  1. Causality: a causal system has outputs that only depend on current and/or previous inputs. Example of a causal system: y(t) = x(t) + x(t − 1). Example of a non-causal system: y(t) = x(t) + x(t + 1).
                                                    1. Stability: there are many types of stability; for this course, we first consider BIBO (Bounded Input Bounded Output) stability. A system is BIBO stable if, for all bounded inputs (∃B ∈ ℝ, |x(t)| < B), the output is also bounded (|y(t)| < ∞).
                                                      1. Time Invariance: a system is time-invariant if a shift in the time domain corresponds to the same shift in the output, i.e., if x2(t) = x1(t − t0) then y2(t) = y1(t − t0). Example of a time-invariant system: y(t) = x(t). Example of a time-variant system: y(t) = sin(t)·x(t). In the first example, y2 is the shifted version of y1; this is not true of the second example.
                                                        1. Linearity: a system is linear if the superposition property holds, that is, linear combinations of inputs lead to the same linear combinations of the outputs. A system with inputs x1 and x2 and corresponding outputs y1 and y2 is linear if the input a·x1 + b·x2 produces the output a·y1 + b·y2 for any constants a and b. Example of a linear system: y(t) = 10·x(t). Example of a nonlinear system: y(t) = x(t)².
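                                                           A quick sketch (signals and constants are hypothetical) probing the superposition property numerically for the nonlinear example y(t) = x(t)².

                                                           def T(x):
                                                               # the system under test: squares its input signal
                                                               return lambda t: x(t) ** 2

                                                           x1, x2 = (lambda t: t), (lambda t: 2 * t)
                                                           a, b, t = 3.0, -1.0, 1.5
                                                           combined = lambda t: a * x1(t) + b * x2(t)

                                                           lhs = T(combined)(t)               # response to the combined input
                                                           rhs = a * T(x1)(t) + b * T(x2)(t)  # combination of the responses
                                                           print(lhs == rhs)                  # False: superposition fails, so nonlinear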
                                                          1. Fixing initial or boundary conditions means simply taking the system out of its context, separating it from the rest of the universe. There are no factors, other than the boundary conditions, that influence the motion of the system from the outside.
                                                            1. Interactions between elements happen specifically. They take place on networks. Often, interactions are discrete; they either happen or not. Often, systems operate algorithmically.
                                                               • Complex systems are out of equilibrium, which makes them hard to deal with analytically. If they are self-organized critical, the statistics behind them can be understood.
                                                               • Many complex systems follow evolutionary dynamics. As they progress, they change their own environment, their context, or their boundary conditions.
                                                               • Many complex systems are adaptive and robust at the same time. They operate at the 'edge of chaos'. Self-organized criticality is a mechanism that regulates systems towards their edge of chaos.
                                                               • Most evolutionary complex systems are path-dependent and have memory. They are therefore non-ergodic and non-Markovian. The adjacent possible is a way of conceptualizing the evolution of the 'reachable' phase space.
                                                                1. • Probabilities are abstract measures that measure our chances of observing specific events before sampling them. Histograms and relative frequencies (normalized histograms) are empirical distribution functions that we observe when we are sampling a process.
                                                                   • Probability distribution functions can have a mean, a variance, and higher moments. Empirical distribution functions have a sample mean (average) and a sample variance.
                                                                   • Probability distributions of more than one variable are called joint probabilities.
                                                                   • A conditional probability is the ratio between a joint probability and one of its marginal probabilities, the condition.
                                                                   • Two random variables are statistically independent if the distribution function of one variable does not depend on the outcome of the other.
                                                                   • Frequentist hypothesis testing uses a null hypothesis and a test statistic to test if the null hypothesis for the given data can be rejected at a specified confidence level. If not, the hypothesis is not rejectable at that confidence level.
                                                                  1. The concept of limits is essential for dealing with stochastic processes. Intuitively, a limit means that elements of a sequence of 'objects' become increasingly 'similar' to a particular 'limit-object', until the elements become virtually indistinguishable from the limit object: the sequence converges to its limit. The similarity can be defined in several ways; it has to be clarified beforehand which similarity measure is to be used. What is a limit of a sequence of random variables? Assume the random variables X(n), n = 1, 2, 3, ..., over the sample space Ω(X(n)) = {1, ..., W}, and a random variable Y over the same Ω(X(n)). What do we mean by a limit lim_{n→∞} X(n) → Y of random variables? For two real numbers one can easily visualize what it means to converge. One possibility for giving meaning to a limit for random numbers is to consider limits of distribution functions. If the probabilities are q_i(n) = P(X(n) = i) and q̄_i = P(Y = i), then the limit X(n) → Y can be understood in terms of the limit lim_{n→∞} q_i(n) → q̄_i for all i ∈ Ω. This is the so-called point-wise limit of distribution functions.
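                                                                     An illustrative sketch (a two-state Markov chain with assumed transition probabilities, not from the source) of such a point-wise limit: the distribution q(n) of X(n) converges component-wise to a limit distribution q̄.

                                                                     P = [[0.9, 0.1],
                                                                          [0.5, 0.5]]        # illustrative transition matrix
                                                                     q = [1.0, 0.0]          # distribution of X(1)
                                                                     for n in range(200):
                                                                         # q(n+1)_j = sum_i q(n)_i * P[i][j]
                                                                         q = [sum(q[i] * P[i][j] for i in range(2)) for j in range(2)]
                                                                     print(q)                # ≈ [0.833, 0.167]: the point-wise limit q̄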
                                                                    1. Almost all complex systems involve history- or path-dependent processes. In general they are poorly understood. Path-dependent processes are processes t → X_t, where the probability of an outcome at the next time step is a function of the history of prior outcomes: P(X_{N+1} = x_{N+1} | x^{(N)}) = f(x_1, x_2, ..., x_N) (2.79). Such a process produces sequences x^{(N)}. There are 'mild' versions of path-dependent processes that depend not on the exact order of the sequence, but only on the histogram of x^{(N)}: P(X_{N+1} = x_{N+1} | x^{(N)}) = f(k(x^{(N)})). Processes of this kind can be partially understood. We will discuss them in Section 6.5.2.3. If path-dependent processes also depend on the order of the sequence, then it is quite difficult to understand their statistics. In contrast to Markov processes, path-dependent processes are frequently non-ergodic, which means in this context that time averages and ensemble averages do not coincide.
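                                                                       A sketch of the 'mild', histogram-dependent kind: a Pólya urn, where the next draw depends only on the counts so far. Separate runs lock in different long-run fractions, illustrating non-ergodicity (all code and parameters are assumed for illustration).

                                                                       import random

                                                                       def polya_run(steps=10000):
                                                                           red, black = 1, 1       # start with one ball of each color
                                                                           for _ in range(steps):
                                                                               if random.random() < red / (red + black):
                                                                                   red += 1        # drew red: reinforce red
                                                                               else:
                                                                                   black += 1      # drew black: reinforce black
                                                                           return red / (red + black)

                                                                       print([round(polya_run(), 3) for _ in range(5)])  # each run settles on a different fraction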
                                                          2. A system can have a leading part, p, that it is centered around; a change in p changes the rest, i.e., an individual. Progressive individualization: certain parts gain a central role over time, and this is part of the overall ordering and rank of objects. LOA: see as individual parts or as a whole, nestedly.
                                                          3. http://maxim.ece.illinois.edu/teaching/fall08/lec3.pdf
                                                        2. Semantics + systems?
                                                          1. Linguistics
                                                            1. The association of relational words such as adjectives and verbs in a corpus with matrices produces a large amount of matrix data, and raises the question of characterising the information present in this data. Matrix distributions have been studied in a variety of areas of applied and theoretical physics.

                                                              Notes:

                                                              • https://arxiv.org/pdf/1703.10252.pdf
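                                                              A toy sketch of the idea (all numbers and names hypothetical): a relational word such as an adjective is represented as a matrix acting on a noun's distributional vector.

                                                              import numpy as np

                                                              noun = np.array([1.0, 0.5])           # hypothetical vector for a noun
                                                              adjective = np.array([[0.9, 0.1],     # hypothetical matrix for an adjective
                                                                                    [0.2, 0.8]])
                                                              phrase = adjective @ noun             # composed meaning of 'adjective noun'
                                                              print(phrase)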
                                                              1. The starting point of tensor-based models of language is a formal analysis of the grammatical structure of phrases and sentences. These are then combined with a semantic analysis, which assigns meaning representations to words and extends them to phrases and sentences compositionally, based on the grammatical analysis: Grammatical Structure =⇒ Semantic Representation
                                                                1. The grammatical structure of language has been made formal in different ways by different linguists. We have the work of Chomsky on generative grammars [20], the original functional systems of Ajdukiewicz [21] and Bar-Hillel [22] and their type-logical reformulation by Lambek in [23]. There is a variety of systems that build on the work of Lambek, among which the Combinatory Categorial Grammar (CCG) of Steedman [24] is the most widespread. These latter models employ ordered algebras, such as residuated monoids, elements of which are interpreted as function-argument structures. As we shall see in more detail below, they lend themselves well to a semantic theory in terms of vector and tensor algebras via the map-state duality: Ordered Structures =⇒ Vector and Tensor Spaces
                                                                  1. There are a few choices around when it comes to which formal grammar to use as base. We discuss two possibilities: pregroup grammars and CCG. The contrast between the two lies in the fact that pregroup grammars have an underlying ordered structure and form a partial order compact closed category [25], which enables us to formalise the syntax-semantics passage as a strongly monoidal functor, since the category of finite dimensional vector spaces and linear maps is also compact closed (for details see [5, 26, 27]). So we have: Pregroup Algebras =⇒ Vector and Tensor Spaces (via a strongly monoidal functor). In contrast, CCG is based on a set of rules, motivated by the combinatorial calculus of Curry [28] that was developed for reasoning about functions in arithmetic and was extended and altered for purposes of reasoning about natural language constructions. CCG is more expressive than pregroup grammars: it covers the weakly context-sensitive fragment of language [29], whereas pregroup grammars cover only the context-free fragment.

                                                                    Notes:

                                                                    • https://books.google.com/books?id=wbWJ3hXrUDEC&pg=PA80&lpg=PA80&dq=haskell+curry+linguistics&source=bl&ots=uneFpvhdGb&sig=ACfU3U1EMpRVYRvsABijnvz-1eyNPsNiAQ&hl=en&sa=X&ved=2ahUKEwi9hMz7urHpAhXNl-AKHYw5AfMQ6AEwA3oECAcQAQ#v=onepage&q=haskell%20curry%20linguistics&f=false
                                                                    1. SEE NOTE for linguistics, and see more on Haskell as mentioned in the paper

                                                                      Notes:

                                                                      • https://plato.stanford.edu/entries/computational-linguistics/
                                                                      1. See 3.3 pregroup grammars for a start with process of analysis
                                                                2. Semantics
                                                                  1. Does premise P justify an inference to hypothesis H? ○ An informal, intuitive notion of inference: not strict logic ○ Focus on local inference steps, not long chains of deduction ○ Emphasis on variability of linguistic expression

                                                                    Notes:

                                                                    • https://nlp.stanford.edu/~wcmac/papers/20150410-UPenn-NatLog.pdf
                                                                    1. Robust, accurate natural language inference could enable: ○ Semantic search H: lobbyists attempting to bribe U.S. legislators P: The A.P. named two more senators who received contributions engineered by lobbyist Jack Abramoff in return for political favors ○ Question answering [Harabagiu & Hickl 06] H: Who bought JDE? P: Thanks to its recent acquisition of JDE, Oracle will ... ○ Document summarization
                                                                      1. Cf. paraphrase task: do sentences P and Q mean the same? ○ natural language inference: P → Q? Paraphrase: P ↔ Q
                                                                        1. If you can’t recognize that P implies H, then you haven’t really understood P (or H)
                                                                          1. Thus, a capacity for natural language inference is a necessary (though probably not sufficient) condition for real NLU
                                                                            1. A new(ish) model of natural logic ○ An algebra of semantic relations ○ An account of compositional entailment ○ A weak proof procedure
                                                                              1. Algebra for process:

                                                                                Notes:

                                                                                • https://www.kornai.com/Papers/mol11.pdf https://hal.archives-ouvertes.fr/hal-00962023/document https://www.researchgate.net/publication/263891528_On_Semantic_Algebra_A_Denotational_Mathematics_for_Cognitive_Linguistics_Machine_Learning_and_Cognitive_Computing https://web.stanford.edu/~jurafsky/slp3/16.pdf
                                                                                1. A weak proof procedure 1. Find sequence of edits connecting P and H ○ Insertions, deletions, substitutions, … ○ E.g., by using a monolingual aligner [MacCartney et al. 2008] 2. Determine lexical semantic relation for each edit ○ Substitutions: depends on meaning of substituends: cat | dog ○ Deletions: ⊏ by default: red socks ⊏ socks ○ But some deletions are special: not hungry ^ hungry ○ Insertions are symmetric to deletions: ⊐ by default 3. Project up to find semantic relation across each edit 4. Join semantic relations across sequence of edits
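                                                                                   A heavily simplified sketch of steps 2 and 4 of the procedure above; NatLog's real algebra has seven relations and a full join table, so the relation set and join rules below are illustrative assumptions only.

                                                                                   FORWARD, REVERSE, EQUAL, INDEP = '⊏', '⊐', '≡', '#'

                                                                                   def join(r1, r2):
                                                                                       # compose entailment relations across successive edits
                                                                                       if r1 == EQUAL:
                                                                                           return r2
                                                                                       if r2 == EQUAL:
                                                                                           return r1
                                                                                       if r1 == r2:
                                                                                           return r1    # ⊏ then ⊏ stays ⊏; ⊐ then ⊐ stays ⊐
                                                                                       return INDEP     # mixed directions license no conclusion

                                                                                   edits = [FORWARD, EQUAL, FORWARD]  # e.g. two deletions, one synonym substitution
                                                                                   result = EQUAL
                                                                                   for r in edits:
                                                                                       result = join(result, r)
                                                                                   print(result)                      # ⊏ : P entails H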
                                                                                  1. The RTE3 test suite ● More “natural” NLI problems; much longer premises ● But not ideal for NatLog ○ Many kinds of inference not addressed by NatLog: paraphrase, temporal reasoning, relation extraction, … ○ Big edit distance ⇒ propagation of errors from atomic model
                                                                                    1. Problem: NatLog is too precise? ● Error analysis reveals a characteristic pattern of mistakes: ○ Correct answer is yes ○ Number of edits is large (>5) (this is typical for RTE) ○ NatLog predicts ⊏ or ≡ for all but one or two edits ○ But NatLog predicts some other relation for remaining edits! ○ Most commonly, it predicts ⊐ for an insertion (e.g., “acts as”) ○ Result of relation composition is thus #, i.e. no ● Idea: make it more forgiving, by adding features ○ Number of edits ○ Proportion of edits for which predicted relation is not ⊏ or ≡
                                                                                       1. Natural Logic & shallow features: B. MacCartney and C. D. Manning, "An extended model of natural logic", presents a Natural Logic system (NatLog). The system implements what semanticists have discovered about some semantic properties of linguistic expressions that affect inference, like the monotonicity of quantifier phrases (e.g. nobody) or other expressions (e.g. without).
                                                                                     1. Textual Entailment: textual entailment recognition is the task of deciding, given two text fragments, whether the meaning of one text is entailed from another text. Useful for QA, IR, IE, MT, RC (reading comprehension), CD (comparable documents). Given a text T and a hypothesis H, T |= H if, typically, a human reading T would infer that H is most probably true. • QA application: T: "Norway's most famous painting, The Scream by Edvard Munch, was recovered Saturday, almost three months after it was stolen from an Oslo museum." H: "Edvard Munch painted The Scream" • IR application: T: "Google files for its long awaited IPO". H: "Google goes public."
                                                                                       1. Kinds of inference: benchmarks and an evaluation forum for entailment systems have been developed. Main kinds of inference: syntactic inference, e.g. nominalization: T: "Sunday's election results demonstrated just how far the pendulum of public opinion has swung away from faith in Koizumi's promise to bolster the Japanese economy and make the political system more transparent and responsive to the people's needs." H: "Koizumi promised to bolster the Japanese economy." Other syntactic phenomena: apposition, predicate-complement construction, coordination, embedded clauses, etc. Lexically based inferences: simply based on the presence of synonyms or the like. Phrasal-level synonymy: T: "The three-day G8 summit . . . " and H: "The G8 summit lasts three days"
                                                                                         1. RTE PASCAL: observations. Approaches used are combinations of the following techniques: • Machine-learning classification systems • Transformation-based techniques over syntactic representations • Deep analysis and logical inference • Natural Logic. Overview of the task and the approaches: Ido Dagan, Bill Dolan, Bernardo Magnini, and Dan Roth, "Recognizing textual entailment: Rational, evaluation and approaches". Lucy Vanderwende and William B. Dolan, "What Syntax can Contribute in the Entailment Task", found that in the RTE dataset: • 34% of the test items can be handled by syntax • 48% of the test items can be handled by syntax plus a general-purpose thesaurus
                                                                                           1. Back to philosophy of language. Frege: 1. Linguistic signs have a reference and a sense: (i) "Mark Twain is Mark Twain" vs. (ii) "Mark Twain is Samuel Clemens". (i) same sense and same reference vs. (ii) different sense and same reference. 2. Both the sense and the reference of a sentence are built compositionally. 3. "Sense" is what is common to the sentences that yield the same consequent. 4. Knowing a concept means knowing in which network of concepts it lives. This led to the Formal Semantics studies of natural language, which focused on "meaning" as "reference". Wittgenstein's claims brought philosophers of language to focus on "meaning" as "sense", leading to the "language as use" view.
                                                                                             1. Back to words. The "language as use" school has focused on content-word meaning, whereas the formal semantics school has focused mostly on grammatical words, and in particular on the behaviour of the "logical words". • Content words (open class): words that carry the content or the meaning of a sentence; open-class words, e.g. nouns, verbs, adjectives, and most adverbs. • Grammatical words (closed class): words that serve to express grammatical relationships with other words within a sentence; they can be found in almost any utterance, no matter what it is about, e.g. articles, prepositions, conjunctions, auxiliary verbs, and pronouns. Among the latter, one can distinguish the logical words, viz. those words that correspond to logical operators: negation, conjunction, disjunction, quantifiers.
                                                                                               1. Logical words: quantifiers. Logical words are not always used as their corresponding logic operators. E.g.: "Another step and I shoot you" (= if you take another step, I shoot you). Quantifiers are typical logical words. • In the previous lectures we have seen them at work on the syntax-semantics interface, paying attention to scope ambiguity. • In the following, we will see what has been understood about quantifiers from a formal semantics perspective and from an empirical view.
                                                                                                 1. Quantifiers from the FS angle. Formal semantics has studied the properties that characterize all natural-language determiners, so as to demarcate their class within the class of all logically possible determiners, and to explain facts such as the following: not all QPs can be negated: • not every man • *not few man. Further info in B. Partee, A. ter Meulen and R. Wall, "Mathematical Methods in Linguistics"
                                                                                                   1. Conservativity and Extension. Conservativity: DET is conservative if DET(NP′)(Pred′) is true iff DET(NP′)(NP′ ∩ Pred′) is true. Extension: DET has extension if DET(NP′)(Pred′) remains true if the size of the universe outside changes. For further information: A. Szabolcsi, "Quantification", 2010
                                                                                                     1. Symmetry: "Q A are B" iff "Q B are A". Symmetric quantifiers are: some, no, at least five, exactly three, an even number of, infinitely many; non-symmetric: all, most, at most one-third of the. Symmetry is a feature of (most of) the quantifiers allowed in so-called existential there sentences: "There are at least five men in the garden" is fine, whereas "*There are most men in the garden" is not.
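                                                                                                        A tiny sketch (finite sets and an assumed encoding of determiners as relations between sets) testing symmetry and conservativity.

                                                                                                        some = lambda A, B: len(A & B) > 0   # 'some A are B'
                                                                                                        all_ = lambda A, B: A <= B           # 'all A are B'

                                                                                                        A, B = {3}, {3, 4}
                                                                                                        print(some(A, B) == some(B, A))      # True: 'some' is symmetric
                                                                                                        print(all_(A, B) == all_(B, A))      # False here: 'all' is not symmetric
                                                                                                        # conservativity: DET(A)(B) iff DET(A)(A ∩ B)
                                                                                                        print(all_(A, B) == all_(A, A & B))  # True: 'all' is conservative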
                                                                                                       1. Effects of monotonicity. Licensors of NPIs: monotonicity is crucial for explaining the distribution of polarity items: "No one will ever succeed" vs. *"Someone will ever succeed". Constraint on coordination: NPs can be coordinated by conjunction and disjunction iff they have the same direction of monotonicity, whereas but requires NPs of different monotonicity (though it does not allow iteration freely): • *John or no student saw Jane. • *All the women and few men walk. • John but no student saw Jane. • All the women, few men but several students walk.
                                                                                                       2. Quantifiers from the "language as use" angle. Quantifiers have been studied in detail from the FS angle but have been mostly ignored by the empirically based community, which has focused on content words. They have been studied in pragmatics and psycholinguistics. • QPs and scalar implicature • QPs and performative action • QPs and anaphora. Conjecture: the results found in pragmatics and psycholinguistics research suggest that DS can shed light on QP uses.
                                                                                                     2. Natural-language syntactic structures convey information needed to grasp their semantics. • Many tasks involving natural language need some level of semantic understanding (e.g. NLDB, QA, TE). • FOL-based systems can be used for domain-specific and controlled natural-language applications. • For open-domain and free-text applications, one needs to combine relational representations and ML methods. • Logical words have been mostly ignored both by shallow-system developers and by empirically driven theoreticians.
                                                                                            2. There is in my opinion no important theoretical difference between natural languages and the artificial languages of logicians; indeed I consider it possible to comprehend the syntax and semantics of both kinds of languages with a single natural and mathematically precise theory. (Montague 1970c, 222)

                                                                                               Notes:

                                                                                              • https://plato.stanford.edu/entries/montague-semantics/
                                                                                               1. Montague's method grew out of teaching introductory logic courses. Standard in such courses are exercises in which one is asked to translate natural language sentences into logic. To answer such exercises required a bilingual individual, understanding both the natural language and the logic. Montague provided, for the first time in history, a mechanical method to obtain these logical translations. About this, Montague said: "It should be emphasized that this is not a matter of vague intuition, as in elementary logic courses, but an assertion to which we have assigned exact significance." (Montague 1973, 266)
                                                                                                1. See link for all principles
                                                                                                  1. An important consequence of the principle of compositionality is that all the parts which play a role in the syntactic composition of a sentence, must also have a meaning. And furthermore, each syntactic rule must be accompanied by a semantic rule which says how the meaning of the compound is obtained. Thus the meaning of an expression is determined by the way in which the expression is formed, and as such the derivational history plays a role in determining the meaning.
                                                                                                    1. The Proper Treatment of Quantification in Ordinary English’ (Montague 1973).
                                                                                                       1. Consider the two sentences John finds a unicorn and John seeks a unicorn. These are syntactically alike (subject-verb-object), but are semantically very different. From the first sentence it follows that there exists at least one unicorn, whereas the second sentence is ambiguous between the so-called de dicto reading, which does not imply the existence of unicorns, and the de re reading, from which the existence of unicorns follows. The two sentences are examples of a traditional problem called 'quantification into intensional contexts'. Traditionally the second sentence as a whole was seen as an intensional context, and the novelty of Montague's solution was that he considered the object position of seek as the source of the phenomenon. He formalized seek not as a relation between two individuals, but as a relation between an individual and a more abstract entity, see Section 2.2. Under this analysis the existence of a unicorn does not follow. The de re reading is obtained in a different way.
                                                                                                        1. It was Montague's strategy to apply to all expressions of a category the most general approach, and narrow this down, when required, by meaning postulates. So initially, find is also considered to be a relation between an individual and such an abstract entity, but some meaning postulate restricts the class of models in which we interpret the fragment to only those models in which the relation for find is the (classical) relation between individuals.
                                                                                                          1. As a consequence of this strategy, Montague's paper has many meaning postulates. Nowadays semanticists often prefer to express the semantic properties of individual lexical items directly in their lexical meaning, and then find is directly interpreted as a relation between individuals. Nowadays meaning postulates are mainly used to express structural properties of the models (for instance the structure of time axis), and to express relations between the meanings of words. For a discussion of the role of meaning postulates, see Zimmermann 1999.
                                                                                                            1. Montague proposed the denotation of a descriptive phrase to be a set of properties. For instance, the denotation of John is the set consisting of properties which hold for him, and of every man the set of properties which hold for every man. Thus they are semantically uniform, and then conjunction and/or disjunction of arbitrary quantifier phrases (including e.g. most but not all) can be dealt with in a uniform way. This abstract approach has led to generalized quantifier theory, see Barwise & Cooper 1981 and Peters & Westerståhl 2006. By using generalized quantifier theory a remarkable result has been achieved. It concerns ‘negative polarity items’: words like yet and ever. Their occurrence can be licensed by negation: The 6:05 has arrived yet is out, whereas The 6:05 hasn't arrived yet is OK.
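                                                                                                                A toy sketch of this generalized quantifier view (all denotations are hypothetical): an NP denotes a set of properties, so determiners apply uniformly to any predicate.

                                                                                                                man  = {'john', 'bill'}                  # hypothetical denotations
                                                                                                                runs = {'john', 'bill', 'fido'}
                                                                                                                walks = {'fido'}

                                                                                                                every = lambda A: lambda B: A <= B       # 'every A' holds of B iff A ⊆ B
                                                                                                                some  = lambda A: lambda B: bool(A & B)  # 'some A' holds of B iff A ∩ B ≠ ∅

                                                                                                                print(every(man)(runs))                  # True: "Every man runs"
                                                                                                                print(some(man)(walks))                  # False: "Some man walks" fails here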
                                                                                                              2. Two features of Montague's 'intensional logic' attracted attention. It is a higher-order logic. In those days, linguists, philosophers and mathematicians were only familiar with first-order logic (the logic in which there are only variables for basic entities). Since in Montague semantics the parts of expressions must have meaning too, a higher-order logic was needed (we have already seen that every man denotes a set of properties). The logic has lambda abstraction, which in Montague's days was not a standard ingredient of logic. The lambda operator makes it possible to work with higher-order functions, and it made it possible to cope with differences between syntax and semantics.
                                                                                                                1. Therefore philosophers of language saw, in those days, the role of logic as a tool to improve natural language. Montague is just analysis, but I can make improvements based on Montague.
                                                                                                                1. This motivation for using translations (a tool for obtaining perspicuous representations of meanings) has certain consequences.
                                                                                                                  1. Translation is a tool to obtain formulas which represent meanings. Different, but equivalent formulas are equally acceptable. In the introduction of this article it was said that Montague grammar provided a mechanical procedure for obtaining the logical translation. As a matter of fact, the result of Montague's translation of Every man runs is not identical with the traditional translation, although equivalent with it, see the example in Section 4.1.
                                                                                                                    1. The translation into logic should be dispensable. So in Montague semantics there is nothing like 'logical form' (which plays such an important role in the tradition of Chomsky). For each syntactic rule which combines one or more expressions there is a corresponding semantic rule that combines the corresponding representations of the meanings. This connection is baptized the rule-to-rule hypothesis (Bach 1976). Maybe it is useful to emphasize that (in case the syntactic operation is meaning preserving) the corresponding semantic rule can be the identity mapping.
                                                                                                                      1. Operations depending on specific features of formulas are not allowed. Janssen (1997) criticized several proposals on this aspect. He showed that proposals that are deficient in this respect are either incorrect (make wrong predictions for closely related sentences), or can be corrected and generalized, and thus improved.
                                                                                                                        1. Montague defined the denotation of a sentence as a function from possible worlds and moments of time to truth values. Such a function is called an 'intension'. As he said (Montague 1970a, 218), this made it possible to deal with the semantics of common phenomena such as modifiers, e.g. in Necessarily the father of Cain is Adam. Its denotation cannot be obtained from the truth value of The father of Cain is Adam: one has to know the truth value for other possible worlds and moments of time.
                                                                                                                          1. Fox and Lappin (2005) review more recent ones. The latter authors explain that there are two strategies: the first is to introduce impossible worlds in which woodchuck and groundhog are not equivalent, and the second is to introduce an entailment relation with the property that identity does not follow from reciprocal entailment. Fox and Lappin follow the second strategy.
                                                                                                                            1. In Montague's approach possible worlds are basic objects without internal or external structure. Phenomena having to do with belief, require external structure, such as an accessibility relation for belief-alternatives. Counterfactuals require a distance notion to characterize worlds which differ minimally from each other. Structures on possible worlds are used frequently. Sometimes an internal structure for possible worlds is proposed. A possible world determines a set of propositions (those propositions which are true with respect to that world), and in Fox and Lappin 2005, the reverse order is followed. They have propositions as primitive notions, and define possible worlds on the basis of them. Also Cresswell (1973) provides a method for obtaining possible worlds with internal structure: he describes how to build possible worlds from basic facts. None of these proposals for internal structure have been applied by other authors than the proposers.
                                                                                                                              1. The philosophical status of certain entities is not so clear, such as pains, tasks, obligations and events. These are needed when evaluating sentences like e.g. Jones had a pain similar to the one he had yesterday. In 'On the nature of certain philosophical entities' (Montague 1969), Montague describes how these notions can be described using his intensional logic; they are properties of moments of time in a possible world. Of these notions, only events occur in papers by other authors, albeit not in the way Montague suggested. They are seen as basic, but provided with an algebraic structure allowing, e.g., subevents (Link 1998, ch. 10–12; Bach 1986a).
                                                                                                                                1. The set E may include whatever one would like to consider as basic entities: numbers, possible objects, and possible individuals. Whether an individual is considered to be really living or existing at a certain moment of time or in a certain possible world is not given directly by the model; one has to introduce a predicate expressing this. Normally the set E has no internal structure, but for mass nouns (which have the characteristic property that any part of water is water), a structure is needed, see Pelletier & Schubert 2003. Also plurals might evoke a structure on the set E, e.g. when sum-individuals are used (see Link 1983, 1998 (ch. 1–4), and Bach 1986a). Also when properties (loving John) are considered as entities for which predicates may hold (Mary likes loving John) structure is needed: property theory gives the tools to incorporate them (see Turner 1983).
                                                                                                      2. https://web.stanford.edu/class/linguist236/materials/ling236-handout-04-11-natlog.pdf
                                                                                                        1. Monotonicity Reasoning, i.e., Predicate Replacement, (b) Conservativity, i.e., Predicate Restriction, and also (c) Algebraic Laws for inferential features of specific lexical items.
                                                                                                          1. CONCLUSION 1: The rules of grammar, which generate the grammatical sentences of English, filtering out the ungrammatical sentences, are not distinct from the rules relating the surface forms of English sentences to their corresponding logical forms. [. . . ] At present, the theory of generative semantics is the only theory of grammar that has been proposed that is consistent with conclusion 1.
                                                                                                            1. humans do not carry anything analogous to infinite sets of possible worlds or situations around in their heads, so the study of deduction — inferential relations based on syntactic properties of some kind of “representations” of denotations — are [sic] potentially of relevance to the psychology of language and to computational processing of meaning in a way that model-theoretic semantics alone is not.
                                                                                                                1. In earlier writings I had raised some worries about the possible psychological reality of possible worlds which in retrospect I think arose partly from misguided concern with trying to fit possible-worlds theory "inside the speaker's head": but we don't try to fit the theory of gravitation "inside a falling apple". Nevertheless, linguists do care about the mechanisms, and if possible-worlds semantics is a reasonable theory about the language user's semantic competence, there still have to exist some internal mechanisms partly by virtue of which the external theory is correct.
                                                                                                                1. Properties in common • Commitment to compositionality • A rich lexicon and a small stock of composition rules • Basis in higher-order logic • Aspirations to direct interpretation of surface forms
                                                                                                                  1. Differences • Emphasis on – Possible worlds: Model theory – Natural logic: Deduction • The lexicon – Possible worlds: Meanings as truth functions are primary – Natural logic: Relations between meanings are primary • Composition – Possible worlds: emphasis on function application between lexical meanings – Natural logic: emphasis on function composition between lexical relationships
                                                                                                                    1. https://plato.stanford.edu/entries/possible-worlds/
                                                                                                                2. For a semanticist, the most obvious approach to NLI relies on full semantic interpretation: first, translate p and h into some formal meaning representation, such as first-order logic (FOL), and then apply automated reasoning tools to determine inferential validity. While the formal approach can succeed in restricted domains, it struggles with open-domain NLI tasks such as RTE. For example, the FOL-based system of (1) [=Bos & Markert 2005 —CP] was able to find a proof for less than 4% of the problems in the RTE1 test set. The difficulty is plain: truly natural language is fiendishly complex. The formal approach faces countless thorny problems: idioms, ellipsis, paraphrase, ambiguity, vagueness, lexical semantics, the impact of pragmatics, and so on. Consider for a moment the difficulty of fully and accurately translating (1) to a formal meaning representation. Yet (1) also demonstrates that full semantic interpretation is often not necessary to determining inferential validity.
                                                                                                                  1. Semantic truth conditionals

                                                                                                                    Notes:

                                                                                                                    • algebra:  https://www.cl.cam.ac.uk/teaching/1011/L107/semantics.pdf   http://www.ccs.neu.edu/home/riccardo/papers/carpenter-logical.pdf
                                                                                                                    1. There are two aspects to semantics. The first is the inferences that language users make when they hear linguistic expressions. We are all aware that we do this and may feel that this is what understanding and meaning are. But there is also the question of how language relates to the world, because meaning is more than just a mental phenomenon – the inferences that we make and our understanding of language are (often) about the external world around us and not just about our inner states. We would like our semantic theory to explain both the ‘internal’ and ‘external’ nature of meaning.
                                                                                                                      1. Truth-conditional semantics attempts to do this by taking the external aspect of meaning as basic. According to this approach, a proposition is true or false depending on the state of affairs that obtain in the world and the meaning of a proposition is its truth conditions. For example, John is clever conveys a true proposition if and only if John is clever. Of course, we are not interested in verifying the truth or falsity of propositions – we would get into trouble with examples like God exists if we tried to equate meaning with verification. Rather knowing the meaning of a proposition is to know what the world would need to be like for the sentence to be true (not knowing what the world actually is like). The idea is that the inferences that we make or equivalently the entailments between propositions can be made to follow from such a theory.
                                                                                                                        1. Most formal approaches to the semantics of NL are truth-conditional and model-theoretic; that is, the meaning of a sentence is taken to be a proposition which will be true or false relative to some model of the world. The meanings of referring expressions are taken to be entities / individuals in the model and predicates are functions from entities to truth-values (i.e. the meanings of propositions). These functions can also be characterised in an 'external' way in terms of sets in the model – this extended notion of reference is usually called denotation. Ultimately, we will focus on doing semantics in a proof-theoretic way by 'translating' sentences into formulas of predicate / first-order logic (FOL) and then passing these to a theorem prover since our goal is automated text understanding. However, it is useful to start off thinking about model theory, as the validity of rules of inference rests on the model-theoretic interpretation of the logic.
                                                                                                                          1. Models (systems?) The particular approach to truth-conditional semantics we will study is known as model-theoretic semantics because it represents the world as a mathematical abstraction made up of sets and relates linguistic expressions to this model. This is an external theory of meaning par excellence because every type of linguistic expression must pick out something in the model. For example, proper nouns refer to objects, so they will pick out entities in the model. (Proof theory is really derivative on model theory in that the ultimate justification of a syntactic manipulation of a formula is that it always yields a new formula true in such a model.)
                                                                                                                            1. Truth-conditional semantics attempts to capture the notion of meaning by specifying the way linguistic expressions are 'linked' to the world. For example, we argued that the semantic value of a sentence is (ultimately) a proposition which is true or false (of some state of affairs in some world). What then are the semantic values of other linguistic expressions, such as NPs, VPs, and so forth? If we are going to account for semantic productivity we must show how the semantic values of words are combined to produce phrases, which are in turn combined to produce propositions. It is not enough to just specify the semantic value of sentences.
                                                                                                                              1. The problem of the denotation of sentences brings us back to the internal and external aspects of meaning again. What we want to say is that there is more to the meaning of a sentence than the truth-value it denotes, in order to distinguish between different true (or false) sentences.
                                                                                                                                1. At this point you might feel that it is time to give up truth-conditional semantics, because we started out by saying that the whole idea was to explain the internal aspect of meaning in terms of the external, referential part. In fact things are not so bad because it is possible to deal with those aspects of meaning that cannot be reduced to reference in model-theoretic, truth-conditional semantics based on an intensional ‘possible worlds’ logic.
                                                                                                                                  1. Propositional logic (PL) addresses the meaning of sentence connectives by ignoring the internal structure of sentences / propositions entirely. In PL we would represent (10) and many other English sentences as p ∧ q where p and q stand in for arbitrary propositions. Now we can characterise the meaning of and in terms of a truth-table for ∧ which exhausts the set of logical possibilities for the truth-values of the conjoined propositions (p, q) and specifies the truth of the complex proposition as a function of the truth-values of the simple propositions, as below:
                                                                                                                                     p q | p ∧ q
                                                                                                                                     T T |   T
                                                                                                                                     T F |   F
                                                                                                                                     F T |   F
                                                                                                                                     F F |   F
                                                                                                                                    1. We can define entailment, contradiction, synonymy, and so forth in terms of the notion of possible models for a language. Let's start with contradictory propositions. The proposition conveyed by (13) is a contradiction in F1, because it is impossible to construct a model for F1 which would make it true.
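                                                                                                                                       A minimal sketch of this model-based definition (formula and helper assumed for illustration): a proposition is a contradiction iff no model, i.e. no truth assignment, makes it true.

                                                                                                                                       from itertools import product

                                                                                                                                       def is_contradiction(formula, num_vars):
                                                                                                                                           # true iff no assignment of the variables satisfies the formula
                                                                                                                                           return not any(formula(*vals)
                                                                                                                                                          for vals in product([True, False], repeat=num_vars))

                                                                                                                                       print(is_contradiction(lambda p: p and not p, 1))  # True: no possible model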
                                                                                                                                      1. To model relations set-theoretically we need to introduce a notation for ordered pairs, triples, and ultimately n-ary relations. ⟨belinda1, max1⟩ is an ordered pair which might be in the set denoted by love1. Notice that the two-place predicate love1 is not a symmetric relation, so it does not follow (unfortunately) that ⟨max1, belinda1⟩ will also be a member of this set. So a formula P(t1, t2, ..., tn) will be true in a model M iff the valuation function F(P) yields a set containing the ordered entities denoted by the terms t1, t2, ..., tn. Otherwise a model for FOL is identical to the model we developed for F1. FOL allows us to make general statements about entities using quantifiers. The interpretation of the existential quantifier is that there must be at least one entity (in the model) which can be substituted for the bound variable 'x' in e.g. (16d) to produce a true proposition. The interpretation of the universal quantifier (∀) is that every entity (in the model) can be substituted for the bound variable to produce a true proposition.
                                                                                                                                        1. To interpret a formula containing a variable bound by a quantifier (in a model) we need a value assignment function which assigns values to variables in formulas. For FOL, we want a function which assigns an entity to variables over entities. Now we can talk about the truth of a proposition in a model given some value assignment to its variables.
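A minimal sketch of this machinery in Python. The predicate love1 and entities belinda1 and max1 come from the discussion above; the entity fido1 and the predicate snores1 are invented for illustration. The model is a domain plus a valuation F, and quantifiers are evaluated by substituting each entity of the domain for the bound variable.

    # A tiny model: a domain of entities and a valuation function F
    # mapping predicate symbols to sets of tuples (their extensions).
    domain = {"belinda1", "max1", "fido1"}
    F = {
        "love1": {("belinda1", "max1")},       # ordered pairs: love1 is not symmetric
        "snores1": {("max1",), ("fido1",)},
    }

    def holds(pred, *terms):
        """True iff the tuple of terms is in the extension F(pred)."""
        return tuple(terms) in F[pred]

    def exists(formula):
        """∃x: some value assignment to x makes the formula true."""
        return any(formula(e) for e in domain)

    def forall(formula):
        """∀x: every value assignment to x makes the formula true."""
        return all(formula(e) for e in domain)

    # ∃x love1(belinda1, x) -- true: <belinda1 max1> is in F(love1)
    print(exists(lambda x: holds("love1", "belinda1", x)))   # True
    # ∀x snores1(x) -- false: belinda1 is not in the extension of snores1
    print(forall(lambda x: holds("snores1", x)))             # False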
                                                                                                                              2. See semantic truth algebra doc
                                                                                                                        2. Syntax
1. As the ambiguous examples above made clear, syntax affects interpretation because syntactic ambiguity leads to semantic ambiguity. For this reason semantic rules must be sensitive to syntactic structure. Most semantic theories pair syntactic and semantic rules so that the application of a syntactic rule automatically leads to the application of a semantic rule. So if two or more syntactic rules can be applied at some point, it follows that a sentence will be semantically ambiguous.
1. Pairing syntactic and semantic rules, and guiding the application of semantic rules on the basis of the syntactic analysis of the sentence, also leads naturally to an explanation of semantic productivity: if the syntactic rule system is recursive and finite, the semantic rule system will be too. This organisation of grammar incorporates the principle that the meaning of a sentence (its propositional content) is a productive, rule-governed combination of the meanings of its constituents. So to get the meaning of a sentence we combine words, syntactically and semantically, to form phrases, phrases to form clauses, and so on. This is known as the Principle of Compositionality. If language were not compositional in this way, we could not explain semantic productivity.
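A minimal illustration of rule-to-rule composition in Python (the two-word lexicon is invented): each syntactic combination step is paired with a semantic operation, here function application, so the sentence meaning is computed bottom-up from word meanings.

    # Toy lexicon (hypothetical): NPs denote entities; VPs denote
    # characteristic functions of sets of entities.
    lexicon = {
        "Belinda": "belinda1",
        "snores": lambda x: x in {"belinda1"},   # the set of snorers
    }

    # The syntactic rule S -> NP VP is paired with a semantic rule:
    # apply the VP denotation to the NP denotation.
    def sentence(np_word, vp_word):
        return lexicon[vp_word](lexicon[np_word])

    print(sentence("Belinda", "snores"))  # True: composition yields a truth value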
2. Lakoff (1970) defines natural logic as a goal (not a system), and a pragmatic one: to characterize valid patterns of reasoning via surface forms (syntactic forms as close as possible to natural language), without translation to a formal notation such as → ¬ ∧ ∨ ∀ ∃.
1. Precise, yet it sidesteps the difficulties of translating to FOL: idioms, intensionality and propositional attitudes, modalities, indexicals, reciprocals, scope ambiguities, quantifiers such as most, anaphoric adjectives, temporal and causal relations, aspect, unselective quantifiers, adverbs of quantification, donkey sentences, generic determiners, …
1. The subsumption principle: deleting modifiers and other content (usually) preserves truth; inserting new content (usually) does not. Many approximate approaches to RTE exploit this heuristic: try to match each word or phrase in H to something in P, and punish examples which introduce new content in H.
1. Upward monotonicity: actually, there is a more general principle at work. Edits which broaden or weaken usually preserve truth: My cat ate a rat ⇒ My cat ate a rodent; My cat ate a rat ⇒ My cat consumed a rat; My cat ate a rat this morning ⇒ My cat ate a rat today; My cat ate a fat rat ⇒ My cat ate a rat. Edits which narrow or strengthen usually do not: My cat ate a rat ⇏ My cat ate a Norway rat; My cat ate a rat ⇏ My cat ate a rat with cute little whiskers; My cat ate a rat last week ⇏ My cat ate a rat last Tuesday.
1. Semantic containment: there are many different ways to broaden meaning. Deleting modifiers, qualifiers, adjuncts, appositives, etc.: tall girl standing by the pool ⊏ tall girl ⊏ girl. Generalizing instances or classes into superclasses: Einstein ⊏ a physicist ⊏ a scientist. Spatial and temporal broadening: in Palo Alto ⊏ in California, this month ⊏ this year. Relaxing modals: must ⊏ could, definitely ⊏ probably ⊏ maybe. Relaxing quantifiers: six ⊏ several ⊏ some. Dropping conjuncts, adding disjuncts: danced and sang ⊏ sang ⊏ hummed or sang.
1. Monotonicity calculus (Sánchez Valencia 1991): entailment as semantic containment (rat ⊏ rodent, eat ⊏ consume, this morning ⊏ today, most ⊏ some), plus monotonicity classes for semantic functions. Upward monotone: some rats dream ⊏ some rodents dream. Downward monotone: no rats dream ⊐ no rodents dream. Non-monotone: most rats dream # most rodents dream. But it lacks any representation of exclusion (negation, antonymy, …).
1. Semantic exclusion: the monotonicity calculus deals only with semantic containment and has nothing to say about semantic exclusion. E.g., negation (exhaustive exclusion): slept ^ didn't sleep, able ^ unable, living ^ nonliving, sometimes ^ never. E.g., alternation (non-exhaustive exclusion): cat | dog, male | female.
1. My research agenda, 2007–09: build on the monotonicity calculus of Sánchez Valencia; extend it from semantic containment to semantic exclusion; join chains of semantic containment and exclusion relations; apply the system to the task of natural language inference (alternation, negation, forward entailment, analysis of sentences).
1. Motivation recap: to get precise reasoning without full semantic interpretation. P: Every firm surveyed saw costs grow more than expected, even after adjusting for inflation. H: Every big company in the poll reported cost increases. (yes) Approximate methods fail due to lack of precision: the subsumption principle fails because every is downward monotone. Logical methods founder on representational difficulties: full semantic interpretation is difficult, unreliable, and expensive; how would one translate more than expected (etc.) into first-order logic? Natural logic lets us reason without full interpretation: often, we can drop whole clauses without analyzing them.
2. Downward monotonicity: certain context elements can reverse this heuristic! Most obviously, negation: My cat did not eat a rat ⇐ My cat did not eat a rodent. But so can many other negative or restrictive expressions: No cats ate rats ⇐ No cats ate rodents; Every rat fears my cat ⇐ Every rodent fears my cat; My cat ate at most three rats ⇐ My cat ate at most three rodents.
1. Projectivity (= monotonicity++): how do the entailments of a compound expression depend on the entailments of its parts? How does the semantic relation between (f x) and (f y) depend on the semantic relation between x and y (and the properties of f)? Monotonicity gives a partial answer (for ≡, ⊏, ⊐, #), but what about the other relations (^, |, ‿)? We categorize semantic functions based on how they project the basic semantic relations, as the sketch below illustrates.
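A small Python sketch of the containment part of this idea (the three-quantifier table is illustrative, not exhaustive, and covers only the ≡/⊏/⊐/# relations, not ^, |, ‿): a function's monotonicity class determines how it projects a containment relation between its arguments.

    # Monotonicity of some quantifiers in their scope/restrictor (illustrative).
    MONOTONICITY = {
        "some": "up",      # upward monotone
        "no": "down",      # downward monotone
        "most": "non",     # non-monotone (in its restrictor)
    }

    def project(relation, monotonicity):
        """Project a basic containment relation (≡, ⊏, ⊐, #) through a
        function of the given monotonicity class."""
        if monotonicity == "up":                       # relations pass through
            return relation
        if monotonicity == "down":                     # ⊏ and ⊐ are swapped
            return {"⊏": "⊐", "⊐": "⊏"}.get(relation, relation)
        return "#"                                     # non-monotone: nothing inferred

    # rat ⊏ rodent, so:
    print(project("⊏", MONOTONICITY["some"]))  # ⊏ : some rats dream ⊏ some rodents dream
    print(project("⊏", MONOTONICITY["no"]))    # ⊐ : no rats dream ⊐ no rodents dream
    print(project("⊏", MONOTONICITY["most"]))  # # : most rats dream # most rodents dream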
1. See: causality in complex systems, e.g. stochastic processes, in another part of the web.
3. We propose a method for mapping natural language to first-order logic representations capable of capturing the meanings of function words such as every, not and or, but which also uses distributional statistics to model the meaning of content words. Our approach differs from standard formal semantics in that the non-logical symbols used in the logical form are cluster identifiers. Where standard semantic formalisms would map the verb write to a write’ symbol, we map it to a cluster identifier such as relation37, which the noun author may also map to. This mapping is learnt by offline clustering. Unlike previous distributional approaches, we perform clustering at the level of predicate-argument structure, rather than syntactic dependency structure. This means that we abstract away from many syntactic differences that are not present in the semantics, such as conjunctions, passives, relative clauses, and long-range dependencies. This significantly reduces sparsity, so we have fewer predicates to cluster and more observations for each.

Notes:

                                                                                                                                • https://www.aclweb.org/anthology/Q13-1015.pdf
2. Combine: semantic algebra, quantification of linguistics/semantics, complex multilayer systems and LoAs with info, stats, logic, language analysis, and narrative prediction modeled after the real world.
1. What natural logic can't do: it is not a universal solution for natural language inference, and many types of inference are not amenable to it. Paraphrase: Eve was let go ≡ Eve lost her job. Verb/frame alternation: he drained the oil ⊏ the oil drained. Relation extraction: Aho, a trader at UBS… ⊏ Aho works for UBS. Common-sense reasoning: the sink overflowed ⊏ the floor got wet. Etc. It also has a weaker proof theory than FOL: it can't explain, e.g., de Morgan's laws for quantifiers: Not all birds fly ≡ Some birds don't fly.
2. Logic as the “science of reasoning”: the Stoics put the focus on propositional reasoning. Aristotle studies the relations holding between the structure of the premises and of the conclusion. Frege introduces quantifier symbols and a way to represent sentences with more than one quantifier. Tarski provides the model-theoretical interpretation of Frege's quantifiers and hence a way to deal with entailment involving more complex sentences than those studied by the Greeks. Montague, by studying the syntax–semantics relation of linguistic structure, provides the framework for building FOL representations of natural language sentences and hence natural language reasoning. The Lambek calculus captures the algebraic principles behind the syntax–semantics interface of linguistic structure and has been implemented. How can such results be used in real-life applications? Have we captured natural language reasoning?

Notes:

                                                                                                                        • http://disi.unitn.it/~bernardi/RSISE11/Slides/lecture4.pdf
1. Formal Semantics & DB access. Query answering over an ontology: take a reasoning system used to query a DB by exploiting an ontology, O, DB |= q. The DB provides the references and their properties (the A-Box of a description logic); the ontology provides the general knowledge (the T-Box). Formal Semantics allows the development of natural language interfaces to such systems: it allows domain experts to enter their knowledge as natural language sentences to build the ontology, and it allows users to query the DB with natural language questions. A good application: the DB provides the entities, and FS the meaning representation based on those entities.
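A toy Python illustration of O, DB |= q (the individuals, predicates, and the single subsumption axiom are invented): the A-Box supplies facts about individuals, the T-Box supplies general knowledge, and the query is answered against their deductive closure.

    # A-Box: concrete facts about individuals (the DB side).
    abox = {("Professor", "rossi"), ("teaches", "rossi", "logic101")}
    # T-Box: general knowledge, here one subsumption axiom: Professor ⊑ Employee.
    tbox = [("Professor", "Employee")]

    def closure(facts, axioms):
        """Saturate the A-Box under the subclass axioms of the T-Box."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for sub, sup in axioms:
                for f in list(facts):
                    if f[0] == sub and (sup, *f[1:]) not in facts:
                        facts.add((sup, *f[1:]))
                        changed = True
        return facts

    # Query q: is rossi an Employee?  Answered from O and DB together.
    print(("Employee", "rossi") in closure(abox, tbox))  # True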
1. Research questions and examples. Research: can we let users use natural language freely, or do we need to control their use? Which controlled language? Develop a real-case scenario. Examples of systems designed for writing unambiguous and precise specification texts: PENG: http://web.science.mq.edu.au/~rolfs/peng/ ; ACE: http://attempto.ifi.uzh.ch/site/ ; see Ian Pratt's papers on fragments of English. More at: http://sites.google.com/site/controllednaturallanguage/
                                                                                                                    3. semantic objects: https://www.researchgate.net/publication/326851450_Logical_Semantics_and_Commonsense_Knowledge_Where_Did_we_Go_Wrong_and_How_to_Go_Forward_Again
                                                                                                                3. Dependent on relations b/w objects
1. In complex systems, interactions are often specific. Not all elements interact with each other; only certain pairs or groups of elements do. Networks are used to keep track of which elements interact with others in a complex system.
1. Flows: a property; they exist over a network of nodes and connectors, where nodes are agents and connectors designate possible interactions.
1. Multiplier effects: occur when one injects an additional resource at a node; it is passed from node to node, possibly changing along the way (see the sketch after this chain of items).
1. Recycling effects: the effect of cycles in networks, i.e. they can increase output.
1. Diversity: the persistence of an agent depends on the context provided by the other agents. Each agent fills a niche that is defined by the interactions centering on that agent, i.e. if you take it out, the other agents move to fill the hole. Convergence occurs among agents. The spread of an agent can open a new niche, which becomes an opportunity for other agents to exploit and modify for themselves.
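A toy Python sketch of flows and multiplier effects (the three-node network and its weights are invented): a resource injected at one node is passed along weighted links, and the 2 → 0 link closes a recycling loop so each node ultimately receives more than the direct flow alone.

    import numpy as np

    # W[i, j] is the fraction of node i's resource passed to node j per step.
    W = np.array([
        [0.0, 0.5, 0.0],
        [0.0, 0.0, 0.5],
        [0.2, 0.0, 0.0],   # recycling link back to node 0
    ])

    x = np.array([1.0, 0.0, 0.0])   # inject one unit of resource at node 0
    total = x.copy()
    for _ in range(20):             # propagate the injection node to node
        x = x @ W
        total += x

    print(total)  # cumulative resource seen at each node: the multiplier effect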
                                                                                                                  2. The same in system and out
3. Idea: a 'why' operator, similar to if, then, and, etc. If and then are contained in why.
                                                                                                                    1. Quan and qual
                                                                                                                      1. quan: can test the predictions
1. The aim of statistical mechanics is to understand the macroscopic properties of a system on the basis of a statistical description of its microscopic components. The idea behind it is to link the microscopic world of components with the macroscopic properties of the aggregate system. An essential concept that makes this link possible is Boltzmann–Gibbs entropy. A system is often prepared in a macrostate, which means that aggregate properties like the temperature or pressure of a gas are known. There are typically many possible microstates that are associated with that macrostate. A microstate is a possible microscopic configuration of a system.
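The Boltzmann–Gibbs entropy that makes this link explicit (a reconstruction of the standard textbook formula; p_i is the probability of microstate i and k_B is the Boltzmann constant):

    S = − k_B Σ_i p_i ln p_i

which reduces to S = k_B ln W when all W microstates compatible with the macrostate are equally likely.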
                                                                                                                          1. for stats: https://polymer.bu.edu/hes/book-thurner18.pdf
                                                                                                                            1. putting equations within logical semantic analysis?
                                                                                                                          2. https://sci-hub.se/https://doi.org/10.1080/00207720410001734183
3. Two important components have been developed in the social sciences and play a crucial role in the theory of complex systems: • Multilayer interaction networks. In social systems, interactions happen simultaneously, at more or less the same strength scale, on a multitude of superimposed interaction networks. Social scientists, in particular sociologists, have recognized the importance of social networks since the 1970s [156, 397]. • Game theory. Another contribution from the social sciences, game theory is a concept that allows us to determine the outcome of rational interactions between agents trying to optimize their payoff or utility [392]. Each agent is aware that the other agent is rational and that he/she also knows that the other agent is rational. Before computers arrived on the scene, game theory was one of the very few methods of dealing with complex systems. Game theory can easily be transferred to dynamical situations, and it was believed for a long time that iterative game theory …
1. Figure 1.4: Two schematic representations of the same multilayer network. Nodes are characterized by a two-dimensional state vector: the first component is given by colours (light- and dark-grey), the second by shapes (circles, squares). Nodes interact through three types of interaction, represented by full, broken, and dotted lines. (a) shows the projection of the multilayer network onto a single layer, whereas in (b) each type of link is shown in a different layer. The system is complex if states simultaneously change as a function of the interaction network and if interactions change as a function of the states; see Equation 1.1. The multilayer network could represent a network of banks at a given moment, where shapes represent the wealth of each bank and the links could represent financial assets that connect banks with each other: a full line could mean a credit relation, a broken line could represent derivatives trading, and dotted lines indicate that one bank owns shares in another.

Notes:

                                                                                                                              • https://polymer.bu.edu/hes/book-thurner18.pdf
1. The types of link can be friendship, family ties, processes of goods exchange, payments, trust, communication, enmity, and so on. Every type of link is represented by a separate network layer; see, for example, Figure 1.4. Individuals interact through a superposition of these different interaction types (a multilayer network), which happen simultaneously and are often of the same order of magnitude in ‘strength’. Often, networks at one level interact with networks at other levels. Networks that characterize social systems show a rich spectrum of growth patterns and a high level of plasticity. This plasticity of networks arises from restructuring processes through link creation, relinking, and link removal. Understanding and describing the underlying restructuring dynamics can be very challenging; however, there are a few typical and recurring dynamical patterns that allow for scientific progress. We will discuss network dynamics in Chapter 4. Individuals are represented by states, as discussed below.
1. 1. Complex systems are composed of many elements. These are labelled with Latin indices i.
1. 2. These elements interact with each other through one or more interaction types, labelled with Greek indices α. Interactions are often specific between elements. To keep track of which elements interact, we use networks. Interactions are represented as links in the interaction networks, and the interacting elements are the nodes in these networks. Every interaction type can be seen as one network layer in a multilayer network; see Figure 1.4. A multilayer network is a collection of networks linking the same set of nodes. If these networks evolve independently, multilayer networks are superpositions of networks. However, there are often interactions between interaction layers.
1. 3. Interactions are not static but change over time. We use the following notation to keep track of interactions in the system: the strength of an interaction of type α between two elements i and j at time t is denoted by M^α_ij(t), the interaction strength. Interactions can be physical, chemical, social, or symbolic. Most interactions are mediated through some sort of exchange process between nodes. In that sense, interaction strength is often related to the quantity of objects exchanged (gauge bosons for physical interactions, electrons for chemical interactions, financial assets for economic interactions, bottles of wine for positive social interactions, and bullets for aggressive ones, etc.). Interactions can be deterministic or stochastic.
1. 4. Elements are characterized by states. States can be scalar; if an element has various independent states, it will be described by a state vector or a state tensor. States are not static but evolve with time. We denote the state vector by σ_i(t). States can be the velocity of a planet, the spin of an atom, the state of phosphorylation of proteins, the capitalization of a bank, or the political preference of a person. State changes can be deterministic or stochastic. They can be the result of endogenous dynamics or of external driving.
1. The interaction partners of a node in a network (or multilayer network) can be seen as the local ‘environment’ of that node. The environment often determines the future state of the node. In complex systems, interactions can change over time. For example, people establish new friendships or economic relations; countries terminate diplomatic relations. The state of nodes determines (fully or in part) the future state of the link: whether it exists in the future or not, and if it exists, the strength and the direction that it will have. The essence of co-evolution can be encompassed in the statement: • The state of the network (topology and weights) determines the future states of the nodes. • The state of the nodes determines the future state of the links of the network. More formally, co-evolving multiplex networks can be written as

(d/dt) σ_i(t) ∼ F(M^α_ij(t), σ_j(t))   and   (d/dt) M^α_ij(t) ∼ G(M^β_ij(t), σ_j(t)).   (1.1)

Notes:

                                                                                                                                          •  pg 36 https://polymer.bu.edu/hes/book-thurner18.pdf
1. Here, the derivatives mean ‘change within the next time step’ and should not be confused with real derivatives. The first equation means that the states of element i change as a ‘function’, F, that depends on the present state σ_i and the present multilayer network states M^α_ij(t). The function F can be deterministic or stochastic and contains all summations over Greek indices and over all j. The first equation depicts the analytical nature of physics that has characterized the past 300 years: once one specifies F and the initial conditions, say σ_i(t = 0), the solution of the equation provides us with the trajectories of the elements of the system. Note that in physics the interaction matrix M^α_ij(t) represents the four forces: usually it involves only a single interaction type α, is static, and depends only on the distance between i and j. Typically, systems that can be described with the first equation alone …
1. The structure of Equations 1.1 is, of course, not the most general possible. An immediate generalization would be to endow the multilayer networks with a second Greek index, M^αβ_ij, which would allow us to capture cross-layer interactions between elements. It is conceivable that elements and interactions are embedded in space and time; indices labelling the elements and interactions could carry such additional information, i(x, t, ...) or {ij}^αβ(x, t, ...). Finally, one could introduce memory to the elements and interactions of the system.
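A minimal numerical sketch of Equation 1.1 in Python. The specific choices of F and G are invented for illustration (a consensus-style state update and a homophily-style link update, with one layer, i.e. α fixed): states change as a function of the network, and link weights change as a function of the states.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5
    sigma = rng.normal(size=n)        # node states σ_i
    M = rng.random((n, n)) * 0.1      # one interaction layer M_ij

    for t in range(100):
        # dσ_i/dt ~ F(M_ij, σ_j): each state drifts towards its neighbours'
        # states, weighted by link strength: Δσ_i = 0.1 Σ_j M_ij (σ_j − σ_i).
        sigma = sigma + 0.1 * (M @ sigma - M.sum(axis=1) * sigma)
        # dM_ij/dt ~ G(M_ij, σ_j): links strengthen between similar nodes.
        similarity = np.exp(-np.abs(sigma[:, None] - sigma[None, :]))
        M = np.clip(M + 0.01 * (similarity - 0.5), 0.0, 1.0)

    print(sigma.round(3))  # states and links have co-evolved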
2. 5. Complex systems are characterized by the fact that states and interactions are often not independent but evolve together by mutually influencing each other; states and interactions co-evolve. The way in which states and interactions are coupled can be deterministic or stochastic.
1. 6. The dynamics of co-evolving multilayer networks is usually highly non-linear.
1. 7. Complex systems are context-dependent. Multilayer networks provide that context and thus offer the possibility of a self-consistent description of complex systems. To be more precise, for any dynamic process happening on a given network layer, the other layers represent the ‘context’ in the sense that they provide the only other ways in which elements in the initial layer can be influenced. Multilayer networks sometimes allow complex systems to be interpreted as ‘closed systems’. Of course, they can be externally driven, and they are usually dissipative and non-Hamiltonian. In that sense, complex systems are hard to describe analytically.
                                                                                                                                              1. 8. Complex systems are algorithmic. Their algorithmic nature is a direct consequence of the discrete interactions between interaction networks and states.
1. 9. Complex systems are path-dependent and consequently often non-ergodic. Given that the network dynamics is sufficiently slow, the networks in the various layers can be seen as a ‘memory’ that stores and records the recent dynamical past of the system.
                                                                                                                                                  1. 10. Complex systems often have memory. Information about the past can be stored in nodes if they have a memory, or in the network structure of the various layers.
                                                                                                                                                  2. The failure of mathematical models to provide explicit solutions to complex phenomena
                                                                                                                                        3. Nestedness of members
                                                                                                                                          1. a set containing a chain of subsets, forming a hierarchical structure, like Russian dolls.
1. Let B be a non-empty set and C be a collection of subsets of B. Then C is a nested set collection if the two conditions described below hold (a formal statement follows them):
1. The first condition states that the set B, which contains all the elements of every subset, must itself belong to the nested set collection. Some authors[1] also require that B is not empty, or that the empty set is not an element of C.
1. The second condition states that the intersection of any pair of sets in the nested set collection is non-empty only if one set is a subset of the other.
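Formally (a reconstruction from the two conditions just described):

    B ∈ C, and
    ∀X, Y ∈ C: X ∩ Y ≠ ∅ ⟹ (X ⊆ Y ∨ Y ⊆ X).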
                                                                                                                                                  1. A nested hierarchy or inclusion hierarchy is a hierarchical ordering of nested sets.
                                                                                                                                                    1. Matryoshkas represent a nested hierarchy where each level contains only one object, i.e., there is only one of each size of doll; a generalized nested hierarchy allows for multiple objects within levels but with each object having only one parent at each level.
                                                                                                                                                      1. A containment hierarchy is a direct extrapolation of the nested hierarchy concept. All of the ordered sets are still nested, but every set must be "strict" — no two sets can be identical. The shapes example above can be modified to demonstrate this:
                                                                                                                                                        1. Nests are within systems
1. Core ideas: 1. A whole functions by virtue of its parts. 2. An entity is greater than the sum of its parts. 3. It can be anything physical, biological, psychological, sociological, or symbolic. 4. An entity can be static, mechanical, self-regulating, or interactive with its environment. 5. An entity has a hierarchy or heterarchy.
1. Heterarchy: flexible and adaptable patterns suited to the context/situation.
1. Hierarchy: fixed, predictable patterns; a group organized by rank, creating order within a system.
1. Scales exist like LoAs. Immediate scale: home / immediate work area. Proximal scale: entire home, school, entire office, family. Community scale: local government, transportation, religious group. Societal scale: public policies, widely held beliefs.
                                                                                                                                                                  1. Views of systems themselves and objects: (keep separate) Focus, Basic assumptions, Ability/ Disability, Postulates of Change
1. Models are wrong: make the system fit the linguistic nature of the data. All models are wrong, but some are useful. 2.3 Parsimony: since all models are wrong, the scientist cannot obtain a "correct" one by excessive elaboration. On the contrary, following William of Occam, he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist, so overelaboration and overparameterization is often the mark of mediocrity. 2.4 Worrying Selectively: since all models are wrong, the scientist must be alert to what is importantly wrong.
1. All models are approximations. Essentially, all models are wrong, but some are useful. However, the approximate nature of the model must always be borne in mind. So: make a system for ranking approximation. There never was, nor ever will be, an exactly normal distribution or an exact linear relationship. Nevertheless, enormous progress has been made by entertaining such fictions and using them as approximations. The question you need to ask is not "Is the model true?" (it never is) but "Is the model good enough for this particular application?"
                                                                                                                                                                        1. For each model we may believe that its predictive power is an indication of its being at least approximately true. But if both models are successful in making predictions, and yet mutually inconsistent, how can they both be true? Let us consider a simple illustration. Two observers are looking at a physical object. One may report seeing a circular disc, and the other may report seeing a rectangle. Both will be correct, but one will be looking at the object (a cylindrical can) from above and the other will be observing from the side. The two models represent different aspects of the same reality.
                                                                                                                                                                          1. the math: see link

Notes:

                                                                                                                                                                            • http://www.philos.rug.nl/~romeyn/paper/2012_wit_et_al_-_all_models_are_wrong.pdf
                                                                                                                                                                            1. "truth … is much too complicated to allow anything but approximations".
1. Model validation is the task of confirming that the outputs of a statistical model have enough fidelity to the outputs of the data-generating process that the objectives of the investigation can be achieved. Model validation can be based on two types of data: data that was used in the construction of the model and data that was not. Validation based on the first type usually involves analyzing the goodness of fit of the model or analyzing whether the residuals seem to be random (i.e. residual diagnostics). Validation based on the second type usually involves analyzing whether the model's predictive performance deteriorates non-negligibly when applied to pertinent new data.
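A small Python sketch of both validation types (the linear data-generating process and the 80/20 split are invented for illustration): residual diagnostics on the construction data, and predictive performance on held-out data.

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0, 10, 200)
    y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)  # data-generating process

    train, test = slice(0, 160), slice(160, 200)            # hold out 20% of the data
    coeffs = np.polyfit(x[train], y[train], deg=1)          # fit on construction data

    # Type 1: residual diagnostics on the data used to build the model.
    residuals = y[train] - np.polyval(coeffs, x[train])
    print("residual mean ~ 0:", residuals.mean().round(3))

    # Type 2: does predictive performance deteriorate on new data?
    rmse = lambda s: np.sqrt(np.mean((y[s] - np.polyval(coeffs, x[s])) ** 2))
    print("train RMSE:", rmse(train).round(3), "test RMSE:", rmse(test).round(3))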
                                                                                                                                                                                1. When doing a validation, there are three notable causes of potential difficulty, according to the Encyclopedia of Statistical Sciences.[3] The three causes are these: lack of data; lack of control of the input variables; uncertainty about the underlying probability distributions and correlations. The usual methods for dealing with difficulties in validation include the following: checking the assumptions made in constructing the model; examining the available data and related model outputs; applying expert judgment.[1] Note that expert judgment commonly requires expertise in the application area
                                                                                                                                                                                  1. Identifiability analysis is a group of methods found in mathematical statistics that are used to determine how well the parameters of a model are estimated by the quantity and quality of experimental data.[1] Therefore, these methods explore not only identifiability of a model, but also the relation of the model to particular experimental data or, more generally, the data collection process.
1. Assuming a model is fit to experimental data, the goodness of fit does not reveal how reliable the parameter estimates are, and goodness of fit alone is not sufficient to prove the model was chosen correctly. For example, if the experimental data are noisy or if there are too few data points, the estimated parameter values could vary drastically without significantly influencing the goodness of fit. To address these issues, identifiability analysis can be applied as an important step to ensure the correct choice of model and a sufficient amount of experimental data. The purpose of this analysis is either a quantified proof of correct model choice and sufficiency of the experimental data acquired, or such analysis can serve as an instrument for detecting non-identifiable and sloppy parameters, helping to plan experiments and to build and improve the model at the early stages.
1. Structural identifiability analysis is a particular type of analysis in which the model structure itself is investigated for non-identifiability. Recognized non-identifiabilities may be removed analytically, for instance through substitution of the non-identifiable parameters with combinations of them. Overloading the model with independent parameters and applying it to a finite experimental dataset may provide a good fit to the data at the price of making the fitting results insensitive to changes in parameter values, therefore leaving the parameter values undetermined. Structural methods are also referred to as a priori, because this non-identifiability analysis can be performed prior to calculating the fitting score functions, by exploring the number of degrees of freedom of the model and the number of independent experimental conditions to be varied. Practical identifiability analysis can be performed by exploring …
1. Analysis of variables and data within the system. See: curve fitting, sensitivity analysis, estimation theory, identifiability, parameter identification, regression analysis.
1. A model is identifiable if it is theoretically possible to learn the true values of the model's underlying parameters after obtaining an infinite number of observations from it. Mathematically, this is equivalent to saying that different values of the parameters must generate different probability distributions of the observable variables. Usually the model is identifiable only under certain technical restrictions, in which case the set of these requirements is called the identification conditions. A model that fails to be identifiable is said to be non-identifiable or unidentifiable: two or more parametrizations are observationally equivalent. In some cases, even though a model is non-identifiable, it is still possible to learn the true values of a certain subset of the model parameters; in this case we say that the model is partially identifiable. In other cases it may be possible to learn the location of the true parameter up to a certain finite region of the parameter space, in which case the model is set identifiable.
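A Python sketch of non-identifiability (the model y = a·b·x is chosen to make the problem visible): a and b are not individually identifiable because only their product affects the observable distribution, so very different (a, b) pairs are observationally equivalent.

    import numpy as np

    rng = np.random.default_rng(2)
    x = np.linspace(0, 1, 100)
    y = 6.0 * x + rng.normal(scale=0.1, size=x.size)  # true a*b = 6

    def sse(a, b):
        """Sum of squared errors for the model y = a*b*x."""
        return np.sum((y - a * b * x) ** 2)

    # Same product, same fit: a and b are not separately identifiable.
    print(sse(2.0, 3.0).round(2), sse(1.0, 6.0).round(2), sse(12.0, 0.5).round(2))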
1. The field of system identification uses statistical methods to build mathematical models of dynamical systems from measured data.[1] System identification also includes the optimal design of experiments for efficiently generating informative data for fitting such models, as well as model reduction. A common approach is to start from measurements of the behavior of the system and the external influences (inputs to the system) and try to determine a mathematical relation between them without going into many details of what is actually happening inside the system; this approach is called system identification. One of the many possible applications of system identification is in control systems.
1. One could build a so-called white-box model based on first principles, e.g. a model for a physical process from the Newton equations, but in many cases such models will be overly complex and possibly even impossible to obtain in reasonable time due to the complex nature of many systems and processes. A much more common approach is therefore to start from measurements of the behavior of the system and the external influences (inputs to the system) and try to determine a mathematical relation between them without going into the details of what is actually happening inside the system. Two types of models are common in the field of system identification. Grey-box model: although the peculiarities of what is going on inside the system are not entirely known, a certain model based both on insight into the system and on experimental data is constructed; this model does, however, still have a number of unknown free parameters which can be estimated using system identification.
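A minimal identification sketch in Python (the first-order system, its coefficients, and the input signal are invented): measure input u and output y, then estimate the relation y[t] = a·y[t−1] + b·u[t−1] by least squares, without modelling the system's internals.

    import numpy as np

    rng = np.random.default_rng(3)
    T = 200
    u = rng.normal(size=T)                      # measured external input
    y = np.zeros(T)
    for t in range(1, T):                       # unknown "true" system: a=0.8, b=0.5
        y[t] = 0.8 * y[t - 1] + 0.5 * u[t - 1] + rng.normal(scale=0.01)

    # Least-squares fit of y[t] = a*y[t-1] + b*u[t-1] from measurements alone.
    X = np.column_stack([y[:-1], u[:-1]])
    a_hat, b_hat = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    print(a_hat.round(3), b_hat.round(3))       # ≈ 0.8, 0.5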
2. Inferences are said to possess internal validity if a causal relationship between two variables is properly demonstrated.[1][2] A valid causal inference may be made when three criteria are satisfied: the "cause" precedes the "effect" in time (temporal precedence); the "cause" and the "effect" tend to occur together (covariation); and there are no plausible alternative explanations for the observed covariation (nonspuriousness).[2]
2. The input for the data sets used in the visual analytics process are heterogeneous data sources (i.e., the internet, newspapers, books, scientific experiments, expert systems). From these rich sources, the data sets S = {S_1, ..., S_m} are chosen, where each S_i, i ∈ {1, ..., m}, consists of attributes A_i1, ..., A_ik. The goal, or output, of the process is insight I. Insight is either obtained directly from the set of created visualizations V or through confirmation of hypotheses H as the results of automated analysis methods. This formalization of the visual analytics process is illustrated in the figure; arrows represent the transitions from one set to another. More formally, the visual analytics process is a transformation F: S → I, where F is a concatenation of functions f ∈ {D_W, V_X, H_Y, U_Z}.

Notes:

                                                                                                                                                                                          • https://en.wikipedia.org/wiki/Visual_analytics
2. Bonini's paradox: as a model of a complex system becomes more complete, it becomes less understandable; alternatively, "as a model grows more realistic, it also becomes just as difficult to understand as the real-world processes it represents".[3]
1. Borges describes a further conundrum: when the map is contained within the territory, you are led into infinite regress:
                                                                                                                                                                                      1. The inventions of philosophy are no less fantastic than those of art: Josiah Royce, in the first volume of his work The World and the Individual (1899), has formulated the following: 'Let us imagine that a portion of the soil of England has been levelled off perfectly and that on it a cartographer traces a map of England. The job is perfect; there is no detail of the soil of England, no matter how minute, that is not registered on the map; everything has there its correspondence. This map, in such a case, should contain a map of the map, which should contain a map of the map of the map, and so on to infinity.
                                                                                                                                                                                        1. Baudrillard argues in Simulacra and Simulation (1994, p. 1): Today abstraction is no longer that of the map, the double, the mirror, or the concept. Simulation is no longer that of a territory, a referential being or substance. It is the generation by models of a real without origin or reality: A hyperreal. The territory no longer precedes the map, nor does it survive it. It is nevertheless the map that precedes the territory—precession of simulacra—that engenders the territory. The philosopher David Schmidtz draws on this distinction in his book Elements of Justice, apparently deriving it from Wittgenstein's private language argument.
                                                                                                                                                                                          1. emic and etic refer to two kinds of field research done and viewpoints obtained:[1] emic, from within the social group (from the perspective of the subject) and etic, from outside (from the perspective of the observer).
                                                                                                                                                                                          2. Mimesis (/mɪˈmiːsɪs, mə-, maɪ-, -əs/;[1] Ancient Greek: μίμησις mīmēsis, from μιμεῖσθαι mīmeisthai, "to imitate", from μῖμος mimos, "imitator, actor") is a term used in literary criticism and philosophy that carries a wide range of meanings which include imitatio, imitation, nonsensuous similarity, receptivity, representation, mimicry, the act of expression, the act of resembling, and the presentation of the self.[2] In ancient Greece, mimesis was an idea that governed the creation of works of art, in particular, with correspondence to the physical world understood as a model for beauty, truth, and the good. Plato contrasted mimesis, or imitation, with diegesis, or narrative.
                                                                                                                                                                        2. "Systems ontology", which is concerned "with what is meant by "system" and how systems are realized at various levels of the world of observation."Systems paradigms", which is concerned with developing worldviews which "takes [humankind] as one species of concrete and actual system, embedded in encompassing natural hierarchies of likewise concrete and actual physical, biological, and social systems. "Systems axiology", which is concerned with developing models of systems that involve "humanistic concerns", and views "symbols, values, social entities and cultures" as "something very "real"" and having an "embeddedness in a cosmic order of hierarchies"
1. Multiplicity of appropriate systems, within context: no grand scheme. Critical systems thinking is a systems thinking framework that wants to bring unity to the diversity of different systems approaches.
                                                                                                                                                                            1. Living and compromising with plurality in general
1. Boundary critique is a general systems thinking principle similar to concepts such as multiple perspectives and interconnectedness. Boundary critique, according to Cabrera, is "in a way identical to distinction making as both processes cause one to demarcate between what is in and what is out of a particular construct". Boundary critique may also allude to how one must be explicit (e.g., critical) about these boundary decisions. Distinction making, on the other hand, is autonomic: one constantly makes distinctions all of the time.
                                                                                                                                                                                1. Self-reflective boundary relating to the question "What are my boundary judgements?". Dialogical boundary relating to the question "Can we agree on our boundary judgements?". Controversial boundary relating to the question "Don't you claim too much?".
                                                                                                                                                                                  1. "we can conceive of no radical separation between forming and being formed, and between substance and space and time…the universe is conceived as a continuum [in which] spatio-temporal events disclose themselves as "stresses" or "tensions" within the constitutive matrix…the cosmic matrix evolves in patterned flows…some flows hit upon configurations of intrinsic stability and thus survive, despite changes in their evolving environment…these we call systems."[50] In this way Ervin Laszlo accommodated the intrinsic continuity of the cosmos understood as a plenum while insisting that it contained real systems whose properties emerge from the inherent dynamics of the universe. There are levels that are objective, but the suvjective system of the system itself should be made too.
1. So: 1. make the system; 2. make a self-analysis system; 3. remake the system on levels of abstraction as needed.
2. This perspective is grounded in the recognition that values have to be overtly taken into account in a realistic, i.e. useful, systems paradigm.
                                                                                                                                                                                    1. But also avoid too much pluralism: elucidating the ontological foundations of values and normative intuitions, so as to incorporate values into Laszlo's model of the natural systems in a way that is holistic. Support universal values
                                                                                                                                                                            2. Connections b/w objects
                                                                                                                                                                              1. Postulate: theoretical statement which suggests how two or more concepts are related
1. Study functions and parts in isolation, together, and on different levels, then solve issues with these principles.
1. Imagine a network: what is its structure? What is its function? Are all networks the same, or are they different? If they are the same, why and how? Do networks change or remain the same over time? What is the relation of parts to the whole and vice versa? When did the network emerge? Where? Who made it? Where can it be observed? What is it? Simple or complex? What is its relation to other forms?
                                                                                                                                                                                    1. Complexity: moment of new pattern of coherence and structure
1. Strange loops: self-reflexive circuits where meaning becomes undecidable. Signification is self-reflexive. Self-reflexivity creates a system, where new patterns emerge with each loop. Self-organizing: self-subsistent parts establish an external order. Does man only encounter himself upon feeling threatened by the existence of objects? Usefulness is determined by how far self-consciousness penetrates it (expand later) - degrees in LoAs - and has the certainty of its individual self. Self-awareness enjoys itself in and through otherness; the origin of the other is the penetration of the essence.
1. A system is final when the parts of the system do not point to any external telos but are their own end or purpose - the purpose is itself: inner teleology. So if the relations are the organisms, we must analyse the structure of complex relations within the system to understand it (a LoA of relations?). External purpose defines machines; internal purpose defines organisms. The differences between objects are formed in a unified whole - part and whole sustain each other. Life is nothing apart from its embodiment in individual organisms, and individual organisms need the large system-processes. This is self-organization. We are self-perceiving but not independent of the world - we are open systems
1. Openness comes via the necessity of an environment, but the system is closed on one level because the environment is nested within the agent. The environment is a differential network made of groups of systems. History must be an open system to be understood
1. Ability to process and use what is undecided (open) and random. Complexity is the state between the rigid structure and the vanishing structure. An object is random if it admits no description shorter than itself - it is not compressible into a shorter version of itself.
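A minimal sketch of that compressibility criterion, using Python's standard zlib as a crude stand-in for an ideal compressor (true Kolmogorov complexity is uncomputable; the data and figures here are illustrative only):

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed length relative to original length. Values near or
    above 1.0 suggest randomness; small values indicate structure."""
    return len(zlib.compress(data, level=9)) / len(data)

structured = b"ab" * 500          # highly patterned: a short rule generates it
random_ish = os.urandom(1000)     # no exploitable pattern

print(compression_ratio(structured))  # very small, e.g. ~0.02
print(compression_ratio(random_ish))  # close to (or slightly above) 1.0
```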
1. Complex systems are many interconnected parts connected in multiple ways. Components can interact serially and in parallel to generate sequential and simultaneous effects and events. They are spontaneous in self-organization, and complicate internality and externality until the distinction is undecidable. Structures from self-organization emerge from, but are not reducible to, the interactivity of the system members
                                                                                                                                                                                                1. Networks
1. Structure is a discrete set of nodes and a specific set of connectors between nodes
1. Dynamics of nodal variables: the dynamics are controlled by the connection strengths and the input-output rules of each member
1. It learns and evolves, e.g. via schemas: present data -> previous data (behavior and effects) -> unfolding (a schema that summarizes and can predict) -> description, prediction, behavior -> consequences (real world) -> selective effect on the viability of the schema and on the competition among schemas. The agent selects patterns from the input and changes its internal model. The schemata are in competition, along with the variables. Systems are bound through feedback loops to the environment.
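A toy Python sketch of that schema loop; every name and rule is hypothetical, meant only to make the cycle (predict -> act -> observe consequence -> revise schema) concrete:

```python
class SchemaAgent:
    """Minimal complex-adaptive-system loop: the schema summarizes past
    data and is revised by its real-world prediction error."""

    def __init__(self, learning_rate: float = 0.2):
        self.schema = 0.0   # internal model: here just a running estimate
        self.lr = learning_rate

    def predict(self) -> float:
        return self.schema  # description/prediction generated by the schema

    def update(self, observed: float) -> float:
        error = observed - self.predict()  # consequence in the real world
        self.schema += self.lr * error     # selective pressure on the schema
        return abs(error)                  # viability: lower error = fitter schema

agent = SchemaAgent()
for x in [1.0, 1.2, 0.9, 1.1, 1.0, 1.05]:  # environmental input stream
    agent.update(x)
print(round(agent.schema, 3))  # the schema has drifted toward the input pattern
```

In a population of such agents, schemata with lower error would be retained and copied, giving the competition among schemas described above.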
1. This sounds like realism: tigers exist, and they are real patterns. To sum up: 1. We are pursuing a realist account of patterns in the physical world. 2. It is noted that some patterns allow for an enormous simplification of our model of reality while trading off relatively little predictive power. 3. Identifying patterns via their predictive utility is suggestive of an instrumentalist and anti-realist approach. How can a pattern's predictive utility be given ontological weight? The beginnings of a theory to answer the latter question were given by Dennett (11) and refined by Ross (41) and Ladyman (27), resulting in an ontological account of patterns called Rainforest Realism. The main thesis is that since computation is a physical process there is a determinate matter of fact about whether a pattern is predictively useful, namely, if it is possible to build a computer to accurately simulate the phenomena in question by means of said pattern, and if doing so is much more computationally efficient than operating at a lower level and ignoring the pattern. Since computation is a physical process, it is the laws of physics which determine whether such and such a computation can occur and hence whether a given pattern is real. Crutchfield and Shalizi prove in Section 5 of (42) that ε-machines are the unique, optimal predictors of the phenomena they are meant to model.
                                                                                                                                                                                                    2. Types of systems:
                                                                                                                                                                                                      1. Cartogram
                                                                                                                                                                                                        1. cladogram
                                                                                                                                                                                                          1. concept mapping
                                                                                                                                                                                                            1. dendrogram
                                                                                                                                                                                                              1. info vis reference model
                                                                                                                                                                                                                1. graph drawing
                                                                                                                                                                                                                  1. heatmap
1. hyperbolic tree
1. multidimensional scaling
                                                                                                                                                                                                                        1. parallel coordinates
1. problem solving environments
                                                                                                                                                                                                                            1. treemapping
                                                                                                                                                                                                                              1. product space localization
                                                                                                                                                                                                                                1. logical models
                                                                                                                                                                                                                                  1. time series
                                                                                                                                                                                                                                    1. LoA
                                                                                                                                                                                                                                      1. triadic models/meta modeling

Notes:

                                                                                                                                                                                                                                        • research https://www.researchgate.net/publication/332043414_Assessment_of_meta-modeling_knowledge_Learning_from_triadic_concepts_of_models_in_the_philosophy_of_science
1. meta-modeling knowledge is an established construct in science education, typically conceptualized in frameworks encompassing hierarchically ordered levels of understanding specific for different aspects (e.g., purpose of models, testing models, changing models). This study critically discusses the appropriateness of assessments based on such frameworks, taking into account triadic concepts of models in the philosophy of science. Empirically, secondary school students' (N=359) responses to modeling tasks are analyzed. In the tasks, the modeling-purpose is not the subject of the assessment, but intentionally provided. The findings show that students' expressed level of understanding significantly depends on both the modeling-purpose and the modeling-context introduced in the tasks. Implications for science education are discussed
                                                                                                                                                                                                                                          1. The process of scientific modeling involves the selection of parts or variables of a system which are considered to be important in a given context and, therefore, should be incorporated in a model (Bailer-Jones, 2003; Giere et al., 2006). Therefore, no model is complete or totally accurate, but pragmatically useful
1. It depends on the modeler's (the cognitive agent's; Giere, 2010) intention, that is the purpose of modeling, which specific features a model will or should have. Therefore, Bailer-Jones (2003) states that 'whether a model is suitable or not can be only decided once the model's function is taken into account' (pp. 70-71). According to these thoughts, most contemporary concepts of models in science are called '(at least) triadic' and include, next to the model and the thing or process which is modelled (i.e. 'dyadic'), an intentionally modeling cognitive agent (Knuuttila, 2005); for example: 'Model M is an entity used by agent A to represent target system S for purpose P'
1. 2) The modeling-purpose should be taken into account to meaningfully judge a given model's appropriateness (Bailer-Jones, 2003). For instance, it is quite reasonable to compare a model with what is already known about a phenomenon (e.g., testing models, level II; Table 01) when the modeling-purpose is knowledge representation. 3) Basically, the notion of hierarchically ordered levels of meta-modeling knowledge suggests, at least implicitly, a higher educational value of the views described in higher levels, which stands in contrast to the various purposes and pragmatic approaches of modeling in science (Bailer-Jones, 2003; Odenbaugh, 2005)
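The triadic formula 'Model M is an entity used by agent A to represent target system S for purpose P' translates directly into a data structure; a purely illustrative Python sketch (all field values hypothetical):

```python
from dataclasses import dataclass

@dataclass
class TriadicModel:
    """Model M used by agent A to represent target S for purpose P."""
    model: str    # M: the representing entity
    agent: str    # A: the intentional cognitive agent
    target: str   # S: the thing or process modelled
    purpose: str  # P: the purpose of modeling

# The same target under different purposes yields different models,
# which is why a model's appropriateness is purpose-relative.
m1 = TriadicModel("billiard-ball model", "teacher", "gas", "explain pressure")
m2 = TriadicModel("van der Waals equation", "engineer", "gas", "predict behaviour")
```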
                                                                                                                                                                                                                                          2. multiagent
1. MASON (Multi-Agent Simulator of Networks and Neighborhoods) is a general-purpose, single-process, discrete-event simulation library for building diverse multiagent models across the social and computational sciences (AI, robotics), ranging from 3D continuous models, to social complexity networks, to discretized foraging algorithms based on evolutionary computation (EC). Design principles: intersection (not union) of needs; an "additive" approach; divorced visualization; checkpointing; EC
1. DYNAMICS: 1. Rain (blue) occurs constantly, causing food (green) to grow. 2. Agents move seeking food, which consumes energy and makes them wet. 3. If an agent gets too wet it will seek shelter (brown sites) until dry enough to go out to eat again. 4. Agents share information (or "mind-read"?) on food and shelter location only with agents of the same culture that they encounter nearby. 5. There is no rule to behave collectively.
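MASON itself is a Java library, so the following is only a Python toy of the dynamics listed above; every constant and rule is a hypothetical stand-in, intended to show the shape of such a step function:

```python
import random

class Forager:
    def __init__(self):
        self.energy, self.wetness = 10.0, 0.0

    def step(self, raining: bool):
        if self.wetness > 3.0:
            self.wetness = max(0.0, self.wetness - 2.0)  # rule 3: shelter, dry off
        else:
            self.energy += random.uniform(-1.0, 2.0)     # rule 2: forage (costs energy)
            if raining:
                self.wetness += 1.0                      # rule 2: moving in rain -> wet

agents = [Forager() for _ in range(50)]
for t in range(100):
    raining = random.random() < 0.5                      # rule 1: stochastic rain
    for a in agents:
        a.step(raining)                                  # rule 5: no collective rule
print(sum(a.energy for a in agents) / len(agents))       # mean energy after the run
```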
1. Centrality: a stratification measure; how to measure "power". Does success depend on local or distal connections? Does success depend on the power/centrality of other actors/vertices to which a focal vertex is connected? Do resources "flow through" intermediary nodes, so that indirect relationships become proxies for direct ones? Or is centrality more in the way of an indicator of exchange opportunities/bargaining power? What are the "rules of the game" as regards the activation of multiple relationships?
1. Degree centrality: the number of distinct relationships or links a node has: C_D(i) = Σ_j x_ij for j = 1..N, j ≠ i; normalized value C_D(i)/(N−1). Differentiate by "in" and "out" connections based on which way power and influence flow. Betweenness centrality: measures the control or broker ability of a node; assume the process is efficient because it occurs along geodesic paths; maximum betweenness is attained by the intermediary node of a star network. Closeness: those who can reach others via few intermediaries are relatively independent/autonomous of others; intermediaries serve as attenuators and filters
                                                                                                                                                                                                                                                    1. Eigenvector" centrality: –  There exist multiple central nodes in the network –  The centrality of a vertex depends on having strong ties to other central vertices –  "Status" rises through strong affiliations with high-status others – Compute: ei = f(Σ rijej; j = 1, N) where ei is the eigenvector centrality measure and rij is the strength of the relationship between i and j (sometimes thought of as j’s dependence on i ) π  What else can we measure? –  Lots of different measures –  Weighted, directional graphs to measure "flow of influence" – Borgatti /Everett partition networks into "core" and "periphery" graphs connected by key nodes
                                                                                                                                                                                                                                              2. Graph theory
1. Graph Theory Primer - I: Social network data consists of binary social relations, of which there are many kinds (role-based, affective, cognitive, flows, etc.). Mathematically, social networks can be represented as graphs or matrices. A graph is defined as a set of nodes and a set of lines that connect the nodes, written mathematically as G=(V,E) or G(V,E). The nodes in a graph represent persons (or animals, organizations, cities, countries, etc.) and the edges (lines) represent relationships among them. The line between persons a and b is represented mathematically like this: (a,b). The graph here contains these edges: (a,b), (a,e), (b,d), (a,c), and (d,c). A subgraph of a graph is a subset of its points together with all the lines connecting members of the subset (subgraph = {a, b, c, d}). The degree of a point is defined as the number of lines incident upon that node: degree(a) = 3
1. Graph Theory Primer - III: In a directed graph, a point has both indegree and outdegree: the outdegree is the number of arcs from that point to other points; the indegree is the number of arcs coming in to the point from other points. A path is an alternating sequence of points and lines, beginning at a point and ending at a point, which does not visit any point more than once. Two paths are point-disjoint if they don't share any nodes. Two paths are edge-disjoint if they don't share any edges. A walk is a path with no restriction on the number of times a point can be visited. A cycle is a path except that it starts and ends at the same point. The length of a path is defined as the number of edges in it. The shortest path between two points is called a geodesic.
                                                                                                                                                                                                                                                    1. In graph theory, an Eulerian trail (or Eulerian path) is a trail in a finite graph that visits every edge exactly once (allowing for revisiting vertices). Similarly, an Eulerian circuit or Eulerian cycle is an Eulerian trail that starts and ends on the same vertex. They were first discussed by Leonhard Euler while solving the famous Seven Bridges of Königsberg problem in 1736. The problem can be stated mathematically like this: Given the graph in the image, is it possible to construct a path (or a cycle; i.e., a path starting and ending on the same vertex) that visits each edge exactly once?
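Euler's criterion (a connected multigraph has an Eulerian circuit iff every vertex has even degree, and an Eulerian trail iff exactly zero or two vertices have odd degree) can be checked directly; a sketch with networkx, encoding the four land masses and seven bridges:

```python
import networkx as nx

G = nx.MultiGraph()  # multigraph: two pairs of land masses share two bridges
G.add_edges_from([("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
                  ("A", "D"), ("B", "D"), ("C", "D")])

odd = [v for v, d in G.degree() if d % 2 == 1]
print(len(odd))              # 4: all four land masses have odd degree
print(len(odd) in (0, 2))    # False -> no Eulerian trail, as Euler showed
print(nx.is_eulerian(G))     # False -> no Eulerian circuit either
```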
                                                                                                                                                                                                                                                      1. Subgraphs, induced subgraphs, and minors A common problem, called the subgraph isomorphism problem, is finding a fixed graph as a subgraph in a given graph. One reason to be interested in such a question is that many graph properties are hereditary for subgraphs, which means that a graph has the property if and only if all subgraphs have it too. Unfortunately, finding maximal subgraphs of a certain kind is often an NP-complete problem.
1. Many problems and theorems in graph theory have to do with various ways of coloring graphs. Typically, one is interested in coloring a graph so that no two adjacent vertices have the same color, or with other similar restrictions. One may also consider coloring edges (possibly so that no two coincident edges are the same color), or other variations; a greedy-coloring sketch follows after the next item.
1. route problems, network flow, covering problems
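The greedy-coloring sketch promised above, using networkx; greedy coloring is a heuristic (optimal coloring is NP-hard) but always produces a proper coloring:

```python
import networkx as nx

G = nx.cycle_graph(5)  # an odd cycle: chromatic number 3
coloring = nx.coloring.greedy_color(G, strategy="largest_first")

# Proper coloring: no two adjacent vertices share a color.
assert all(coloring[u] != coloring[v] for u, v in G.edges())
print(coloring)                      # node -> color index
print(len(set(coloring.values())))   # colors used: 3 for an odd cycle
```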
                                                                                                                                                                                                                                                        2. types of graphs: https://en.wikipedia.org/wiki/Forbidden_graph_characterization
3. In one restricted but very common sense of the term,[1][2] a graph is an ordered pair G = (V, E) comprising: V, a set of vertices (also called nodes or points); E ⊆ {{x, y} | x, y ∈ V ∧ x ≠ y}, a set of edges (also called links or lines), which are unordered pairs of distinct vertices (i.e., an edge is associated with two distinct vertices). A minimal code sketch follows the note below.

Notes:

                                                                                                                                                                                                                                                          • https://en.wikipedia.org/wiki/Graph_theory
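The code sketch promised above: the ordered-pair definition of a graph written out in plain Python, with edges as frozensets so that {x, y} is unordered and x ≠ y is enforced (the example edges are the ones from the primer earlier in this branch):

```python
class Graph:
    """G = (V, E): V a set of vertices, E a set of unordered
    pairs {x, y} of distinct vertices."""

    def __init__(self):
        self.V = set()
        self.E = set()

    def add_edge(self, x, y):
        if x == y:
            raise ValueError("an edge joins two distinct vertices")
        self.V |= {x, y}
        self.E.add(frozenset((x, y)))  # unordered pair, per the definition

    def degree(self, v):
        return sum(1 for e in self.E if v in e)

g = Graph()
for x, y in [("a", "b"), ("a", "e"), ("b", "d"), ("a", "c"), ("d", "c")]:
    g.add_edge(x, y)
print(g.degree("a"))  # 3, matching degree(a) = 3 in the primer
```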
                                                                                                                                                                                                                                                          1. https://en.wikipedia.org/wiki/Logic_of_graphs
                                                                                                                                                                                                                                                            1. Graph-theoretic methods, in various forms, have proven particularly useful in linguistics, since natural language often lends itself well to discrete structure. Traditionally, syntax and compositional semantics follow tree-based structures, whose expressive power lies in the principle of compositionality, modeled in a hierarchical graph. More contemporary approaches such as head-driven phrase structure grammar model the syntax of natural language using typed feature structures, which are directed acyclic graphs. Within lexical semantics, especially as applied to computers, modeling word meaning is easier when a given word is understood in terms of related words; semantic networks are therefore important in computational linguistics. Still, other methods in phonology (e.g. optimality theory, which uses lattice graphs) and morphology (e.g. finite-state morphology, using finite-state transducers) are common in the analysis of language as a graph
                                                                                                                                                                                                                                                          2. Taxonomy
                                                                                                                                                                                                                                                            1. Visual analytics
                                                                                                                                                                                                                                                              1. argument map. semantic mapping?
1. morphological analysis
                                                                                                                                                                                                                                                                  1. cladistics
                                                                                                                                                                                                                                                                    1. concept map
                                                                                                                                                                                                                                                                      1. decision tree
1. hypertext
                                                                                                                                                                                                                                                                          1. concept lattice
                                                                                                                                                                                                                                                                            1. layered graph drawing
                                                                                                                                                                                                                                                                              1. radial tree
                                                                                                                                                                                                                                                                                1. sociogram
                                                                                                                                                                                                                                                                                  1. topic map
                                                                                                                                                                                                                                                                                    1. timeline
                                                                                                                                                                                                                                                                                      1. tree structure
                                                                                                                                                                                                                                                                                        1. entity-relationship model
                                                                                                                                                                                                                                                                                          1. geovisualization
1. olog: combine ologs and dynamic complex systems
                                                                                                                                                                                                                                                                                              1. problem structuring
1. multidimensional scaling
                                                                                                                                                                                                                                                                                                  1. hyperbolic tree
                                                                                                                                                                                                                                                                                                    1. graph drawing
                                                                                                                                                                                                                                                                                                      1. euler graphs
1. existential graphs
1. timing diagram
                                                                                                                                                                                                                                                                                                            1. component diagram
                                                                                                                                                                                                                                                                                                              1. Ladder logic
                                                                                                                                                                                                                                                                                                                1. abstract cladograms
                                                                                                                                                                                                                                                                                                                  1. activity diagram
                                                                                                                                                                                                                                                                                                                    1. comparison diagram
                                                                                                                                                                                                                                                                                                                      1. Flow diagram
                                                                                                                                                                                                                                                                                                                        1. Narrative systems
                                                                                                                                                                                                                                                                                                                          1. Network diagram
                                                                                                                                                                                                                                                                                                                            1. Data flow diagram
                                                                                                                                                                                                                                                                                                                              1. Phase diagram
                                                                                                                                                                                                                                                                                                                                1. State diagram
                                                                                                                                                                                                                                                                                                                                  1. Block diagram
                                                                                                                                                                                                                                                                                                                                    1. Use case diagrams
                                                                                                                                                                                                                                                                                                                                      1. Deployment diagram
                                                                                                                                                                                                                                                                                                                                        1. Composite structure
                                                                                                                                                                                                                                                                                                                                          1. Interaction overview
                                                                                                                                                                                                                                                                                                                                            1. Ishikawa
1. Circles of causality:
                                                                                                                                                                                                                                                                                                                                                1. domain model
1. category theory
                                                                                                                                                                                                                                                                                                                                                    1. space mapping
                                                                                                                                                                                                                                                                                                                                                      1. ladder of abstraction
                                                                                                                                                                                                                                                                                                                                                        1. matrix
                                                                                                                                                                                                                                                                                                                                                          1. ontology engineering
                                                                                                                                                                                                                                                                                                                                                            1. networks
                                                                                                                                                                                                                                                                                                                                                              1. A semantic network, or frame network is a knowledge base that represents semantic relations between concepts in a network. This is often used as a form of knowledge representation. It is a directed or undirected graph consisting of vertices, which represent concepts, and edges, which represent semantic relations between concepts,[1] mapping or connecting semantic fields.
1. A semantic network is used when one has knowledge that is best understood as a set of concepts that are related to one another. Most semantic networks are cognitively based. They also consist of arcs and nodes which can be organized into a taxonomic hierarchy. Semantic networks contributed the ideas of spreading activation, inheritance, and nodes as proto-objects.
                                                                                                                                                                                                                                                                                                                                                                  1. Content in a complex network can spread via two major methods: conserved spread and non-conserved spread.[36] In conserved spread, the total amount of content that enters a complex network remains constant as it passes through.
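The conserved/non-conserved distinction is easy to make concrete; a toy Python sketch (network and rules hypothetical): conserved spread divides a fixed quantity among neighbors, non-conserved spread hands each neighbor a full copy:

```python
# Undirected toy network as an adjacency dict; every node has degree 2.
network = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"], "d": ["b", "c"]}

def spread(start: str, steps: int, conserved: bool) -> dict:
    amount = {n: 0.0 for n in network}
    amount[start] = 1.0
    for _ in range(steps):
        nxt = {n: 0.0 for n in network}
        for node, val in amount.items():
            for nb in network[node]:
                # conserved: split val among neighbors (total stays 1.0);
                # non-conserved: every neighbor receives a full copy.
                nxt[nb] += val / len(network[node]) if conserved else val
        amount = nxt
    return amount

print(sum(spread("a", 5, conserved=True).values()))   # 1.0: total is constant
print(sum(spread("a", 5, conserved=False).values()))  # 32.0: total doubles each step
```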
                                                                                                                                                                                                                                                                                                                                                                    1. percolation: removal of nodes
1. partition: separate
                                                                                                                                                                                                                                                                                                                                                                        1. have a topology
1. In the context of network theory, a complex network is a graph (network) with non-trivial topological features, i.e. features that do not occur in simple networks such as lattices or random graphs but often occur in graphs modelling real systems.
1. Recently, the study of complex networks has been expanded to networks of networks.[9] If those networks are interdependent, they become significantly more vulnerable to random failures and targeted attacks and exhibit cascading failures and first-order percolation transitions.[10]
1. Watts and Strogatz published the first small-world network model, which through a single parameter smoothly interpolates between a random graph and a lattice.[7] Their model demonstrated that with the addition of only a small number of long-range links, a regular graph, in which the diameter is proportional to the size of the network, can be transformed into a "small world" in which the average number of edges between any two vertices is very small (mathematically, it should grow as the logarithm of the size of the network), while the clustering coefficient stays large. It is known that a wide variety of abstract graphs exhibit the small-world property, e.g., random graphs and scale-free networks. Further, real-world networks such as the World Wide Web and the metabolic network also exhibit this property.
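The interpolation is reproducible in a few lines with networkx's built-in Watts-Strogatz generator; the parameters below are illustrative:

```python
import networkx as nx

n, k = 1000, 10  # 1000 nodes, each wired to its 10 nearest ring neighbors
for p in (0.0, 0.1):
    G = nx.connected_watts_strogatz_graph(n, k, p, seed=42)
    L = nx.average_shortest_path_length(G)  # long on the pure lattice (~n/2k)
    C = nx.average_clustering(G)            # stays high after light rewiring
    print(f"p={p}: mean path length {L:.1f}, clustering {C:.2f}")
# A handful of long-range links collapses the path length toward log(n)
# while clustering barely drops: the small-world effect described above.
```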
                                                                                                                                                                                                                                                                                                                                                                                1. Trophic coherence is a property of directed graphs (or directed networks).[1] It is based on the concept of trophic levels used mainly in ecology,[2] but which can be defined for directed networks in general and provides a measure of hierarchical structure among nodes. Trophic coherence is the tendency of nodes to fall into well-defined trophic levels. It has been related to several structural and dynamical properties of directed networks, including the prevalence of cycles[3] and network motifs,[4] ecological stability,[1] intervality,[5] and spreading processes like epidemics and neuronal avalanches.[6]
                                                                                                                                                                                                                                                                                                                                                                                  1. There is as yet little understanding of the mechanisms which might lead to particular kinds of networks becoming significantly coherent or incoherent.[3] However, in systems which present correlations between trophic level and other features of nodes, processes which tended to favour the creation of edges between nodes with particular characteristics could induce coherence or incoherence. In the case of food webs, predators tend to specialise on consuming prey with certain biological properties (such as size, speed or behaviour) which correlate with their diet, and hence with trophic level. This has been suggested as the reason for food-web coherence.[1] However, food-web models based on a niche axis do not reproduce realistic trophic coherence,[1] which may mean either that this explanation is insufficient, or that several niche dimensions need to be considered
1. The relation between trophic level and node function can be seen in networks other than food webs. The figure shows a word adjacency network derived from the book Green Eggs and Ham, by Dr Seuss.[3] The height of nodes represents their trophic levels (according here to the edge direction which is the opposite of that suggested by the arrows, which indicate the order in which words are concatenated in sentences). The syntactic function of words is also shown with node colour. There is a clear relationship between syntactic function and trophic level: the mean trophic level of common nouns (blue) is s_noun = 1.4 ± 1.2, whereas that of verbs (red) is s_verb = 7.0 ± 2.7. This example illustrates how trophic coherence or incoherence might emerge from node function, and also that the trophic structure of networks provides a means of identifying node function in certain systems.
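Trophic levels have a simple operational definition that makes the measure computable: basal nodes (no incoming edges) sit at level 1, and every other node sits one above the mean level of its in-neighbors. A from-scratch Python sketch on a hypothetical toy food web:

```python
# Edges point from prey to predator; each node lists what it consumes.
eats = {
    "grass": [], "algae": [],
    "insect": ["grass"],
    "fish": ["algae", "insect"],
    "bird": ["insect", "fish"],
}

levels = {n: 1.0 for n in eats}
for _ in range(100):  # fixed-point iteration: s_i = 1 + mean of prey levels
    for node, prey in eats.items():
        if prey:      # basal nodes stay at level 1
            levels[node] = 1.0 + sum(levels[p] for p in prey) / len(prey)

for n, s in levels.items():
    print(f"{n}: {s:.2f}")  # grass/algae 1.00, insect 2.00, fish 2.50, bird 3.25
# Trophic coherence is then how tightly the level difference across
# each edge clusters around 1.
```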
                                                                                                                                                                                                                                                                                                                                                                                  2. Network formation is an aspect of network science that seeks to model how a network evolves by identifying which factors affect its structure and how these mechanisms operate. Network formation hypotheses are tested by using either a dynamic model with an increasing network size or by making an agent-based model to determine which network structure is the equilibrium in a fixed-size network.
1. The model begins as a small network or even a single node. The modeler then uses a (usually randomized) rule on how newly arrived nodes form links in order to increase the size of the network. The aim is to determine what properties the network will have when it grows in size. In this way, researchers try to reproduce properties common in most real networks, such as the small-world property or the scale-free property.
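A minimal sketch of this dynamic, growing approach: preferential attachment, the classic randomized rule that reproduces the scale-free property, in plain Python:

```python
import random
from collections import Counter

random.seed(1)
edges = [(0, 1)]
targets = [0, 1]  # each node appears once per incident edge, so sampling
                  # uniformly from this list is sampling by degree

for new in range(2, 2000):          # nodes arrive one at a time
    old = random.choice(targets)    # preferential attachment step
    edges.append((new, old))
    targets += [new, old]

degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1
print(max(degree.values()))  # a hub far above the mean degree of ~2
```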
                                                                                                                                                                                                                                                                                                                                                                                2. Community structure
                                                                                                                                                                                                                                                                                                                                                                                  1. a network is said to have community structure if the nodes of the network can be easily grouped into (potentially overlapping) sets of nodes such that each set of nodes is densely connected internally. In the particular case of non-overlapping community finding, this implies that the network divides naturally into groups of nodes with dense connections internally and sparser connections between groups. But overlapping communities are also allowed. The more general definition is based on the principle that pairs of nodes are more likely to be connected if they are both members of the same community(ies), and less likely to be connected if they do not share communities. A related but different problem is community search, where the goal is to find a community that a certain vertex belongs to.
1. A number of different characteristics have been found to occur commonly, including the small-world property, heavy-tailed degree distributions, and clustering, among others. Another common characteristic is community structure.[1][2][3][4][5] In the context of networks, community structure refers to the occurrence of groups of nodes in a network that are more densely connected internally than with the rest of the network. This inhomogeneity of connections suggests that the network has certain natural divisions within it.
                                                                                                                                                                                                                                                                                                                                                                                      1. Community structures are quite common in real networks. Social networks include community groups (the origin of the term, in fact) based on common location, interests, occupation, etc.[5][6] Finding an underlying community structure in a network, if it exists, is important for a number of reasons. Communities allow us to create a large scale map of a network since individual communities act like meta-nodes in the network which makes its study easier.[7]
1. Methods for finding communities within a network: the minimum-cut method, hierarchical clustering (measuring similarity and cutting at a threshold), betweenness-based algorithms such as Girvan–Newman, modularity maximization, statistical inference, cliques, and models of the probability of connections within and between separate groups.
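A sketch of two of these approaches on a toy graph, assuming networkx is available; both functions exist in networkx.algorithms.community.

```python
# greedy_modularity_communities performs modularity maximization;
# girvan_newman is the betweenness-based edge-removal method.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, girvan_newman

G = nx.karate_club_graph()  # classic benchmark network

communities = greedy_modularity_communities(G)
print([sorted(c) for c in communities])

# first split produced by iterative edge removal (Girvan-Newman)
first_split = next(girvan_newman(G))
print([sorted(c) for c in first_split])
```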
1. hierarchies: A hierarchy (from the Greek ἱεραρχία hierarkhia, "rule of a high priest", from hierarkhes, "president of sacred rites") is an arrangement of items (objects, names, values, categories, etc.) in which the items are represented as being "above", "below", or "at the same level as" one another. Hierarchy is an important concept in a wide variety of fields, such as philosophy, mathematics, computer science, organizational theory, systems theory, and the social sciences (especially political philosophy). A hierarchy can link entities either directly or indirectly, and either vertically or diagonally. The only direct links in a hierarchy, insofar as they are hierarchical, are to one's immediate superior or to one of one's subordinates, although a system that is largely hierarchical can also incorporate alternative hierarchies. Hierarchical links can extend "vertically" upwards or downwards via multiple links in the same direction, following a path. All parts of the hierarchy that are not linked vertically can nevertheless be linked "horizontally" through a path, by traveling up the hierarchy to find a common direct or indirect superior and then down again.
                                                                                                                                                                                                                                                                                                                                                                                            1. Many grammatical theories, such as phrase-structure grammar, involve hierarchy. Direct–inverse languages such as Cree and Mapudungun distinguish subject and object on verbs not by different subject and object markers, but via a hierarchy of persons. In this system, the three (or four with Algonquian languages) persons are placed in a hierarchy of salience. To distinguish which is subject and which object, inverse markers are used if the object outranks the subject. On the other hand, languages include a variety of phenomena that are not hierarchical. For example, the relationship between a pronoun and a prior noun phrase to which it refers, commonly crosses grammatical boundaries in non-hierarchical ways.
1. logical: Mathematically, in its most general form, a hierarchy is a partially ordered set or poset.[9] The system in this case is the entire poset, which is constituted of elements. Within this system, each element shares a particular unambiguous property. Objects with the same property value are grouped together, and each of those resulting levels is referred to as a class. "Hierarchy" is particularly used to refer to a poset in which the classes are organized in terms of increasing complexity. Operations such as addition, subtraction, multiplication and division are often performed in a certain sequence or order. Usually, addition and subtraction are performed after multiplication and division have already been applied to a problem. The use of parentheses is also a representation of hierarchy, for they show which operation is to be done prior to the following ones. For example: (2 + 5) × (7 − 4). In this problem, without the parentheses one would typically multiply 5 by 7 first, based on the rules of mathematical hierarchy; but when the parentheses are placed, one knows to do the operations within the parentheses first before continuing with the problem. These rules are largely dominant in algebraic problems, ones that include several steps to solve. The use of hierarchy in mathematics makes it possible to solve a problem quickly and efficiently without slowly dissecting it. Most of these rules are now known as the proper order of operations for solving certain equations.
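The same hierarchy governs evaluation in most programming languages; a quick Python check:

```python
# Order of operations in code mirrors the mathematical hierarchy:
# multiplication binds tighter than addition unless parentheses intervene.
print(2 + 5 * 7 - 4)        # 33: multiply first, then add/subtract
print((2 + 5) * (7 - 4))    # 21: parentheses force the additions first
```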
                                                                                                                                                                                                                                                                                                                                                                                                  1. subtypes of hierarchies:
                                                                                                                                                                                                                                                                                                                                                                                                    1. A nested hierarchy or inclusion hierarchy is a hierarchical ordering of nested sets.[10] The concept of nesting is exemplified in Russian matryoshka dolls. Each doll is encompassed by another doll, all the way to the outer doll. The outer doll holds all of the inner dolls, the next outer doll holds all the remaining inner dolls, and so on. Matryoshkas represent a nested hierarchy where each level contains only one object, i.e., there is only one of each size of doll; a generalized nested hierarchy allows for multiple objects within levels but with each object having only one parent at each level.
1. A containment hierarchy is a direct extrapolation of the nested hierarchy concept. All of the ordered sets are still nested, but every set must be "strict": no two sets can be identical. Two types of containment hierarchies are the subsumptive containment hierarchy and the compositional containment hierarchy. A subsumptive hierarchy "subsumes" its children, and a compositional hierarchy is "composed" of its children. A hierarchy can also be both subsumptive (general to specific) and compositional (the ordering of parts).
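A minimal sketch of the strictness condition, modeling each level as a Python set; the example data are illustrative.

```python
# A nested (inclusion) hierarchy as strictly nested sets, matryoshka-style;
# the proper-subset check enforces the "no two sets identical" condition
# of a containment hierarchy.
levels = [{1}, {1, 2}, {1, 2, 3}, {1, 2, 3, 4}]

def is_strict_containment_hierarchy(levels):
    return all(a < b for a, b in zip(levels, levels[1:]))  # proper subset

print(is_strict_containment_hierarchy(levels))  # True
```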
                                                                                                                                                                                                                                                                                                                                                                                      2. Syntax trees
                                                                                                                                                                                                                                                                                                                                                                                        1. In linguistics, branching refers to the shape of the parse trees that represent the structure of sentences.[1] Assuming that the language is being written or transcribed from left to right, parse trees that grow down and to the right are right-branching, and parse trees that grow down and to the left are left-branching. The direction of branching reflects the position of heads in phrases, and in this regard, right-branching structures are head-initial, whereas left-branching structures are head-final.[2] English has both right-branching (head-initial) and left-branching (head-final) structures, although it is more right-branching than left-branching.
1. A pathfinder network is a psychometric scaling method based on graph theory and used in the study of expertise, knowledge acquisition, knowledge engineering, scientific citation patterns, information retrieval, and data visualization. Pathfinder networks are potentially applicable to any problem addressed by network theory.
1. Several psychometric scaling methods start from proximity data and yield structures revealing the underlying organization of the data. Data clustering and multidimensional scaling are two such methods. Network scaling represents another method based on graph theory. Pathfinder networks are derived from proximities for pairs of entities. Proximities can be obtained from similarities, correlations, distances, conditional probabilities, or any other measure of the relationships among entities. The entities are often concepts of some sort, but they can be anything with a pattern of relationships. In the pathfinder network, the entities correspond to the nodes of the generated network, and the links in the network are determined by the patterns of proximities. For example, if the proximities are similarities, links will generally connect nodes of high similarity. The links in the network will be undirected if the proximities are symmetrical for every pair of entities; symmetrical proximities mean that the proximity of i to j is the same as the proximity of j to i for every pair.
1. The pathfinder algorithm uses two parameters. The q parameter constrains the number of indirect proximities examined in generating the network; it is an integer value between 2 and n − 1, inclusive, where n is the number of nodes or items. The r parameter defines the metric used for computing the distance of paths (cf. the Minkowski distance); it is a real number between 1 and infinity, inclusive. With ordinal-scale data (see level of measurement), the r parameter should be infinity, because the same PFnet would result from any positive monotonic transformation of the proximity data. Other values of r require data measured on a ratio scale. The q parameter can be varied to yield the desired number of links in the network. Essentially, pathfinder networks preserve the shortest possible paths given the data, so links are eliminated when they are not on shortest paths.
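A rough Python sketch of the special case PFnet(q = n − 1, r = ∞): with the maximum metric, Floyd–Warshall computes minimax path distances, and an edge survives only if no indirect path beats it. This is a simplification of the general algorithm, not a faithful implementation for arbitrary q and r.

```python
# An edge survives only if no indirect path has a smaller maximum
# edge weight (i.e., the edge lies on a minimax shortest path).
import math

def pathfinder_inf(dist):
    n = len(dist)
    d = [row[:] for row in dist]
    for k in range(n):                      # Floyd-Warshall, max metric
        for i in range(n):
            for j in range(n):
                via = max(d[i][k], d[k][j])
                if via < d[i][j]:
                    d[i][j] = via
    return [(i, j) for i in range(n) for j in range(i)
            if dist[i][j] <= d[i][j] and math.isfinite(dist[i][j])]

dist = [[0, 1, 4],
        [1, 0, 2],
        [4, 2, 0]]
# edge (2,0) of weight 4 is pruned: the path 0-1-2 has maximum weight 2
print(pathfinder_inf(dist))
```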
                                                                                                                                                                                                                                                                                                                                                                                            2. pathfinder networks
                                                                                                                                                                                                                                                                                                                                                                                              1. Sequential dynamical system
1. Sequential dynamical systems (SDSs) are a class of graph dynamical systems. They are discrete dynamical systems which generalize many aspects of, for example, classical cellular automata, and they provide a framework for studying asynchronous processes over graphs. The analysis of SDSs uses techniques from combinatorics, abstract algebra, graph theory, dynamical systems and probability theory.
1. An SDS is constructed from the following components: a finite graph Y with vertex set v[Y] = {1, 2, ..., n} (directed or undirected, depending on the context); a state x_v for each vertex v of Y, taken from a finite set K, where the system state is the n-tuple x = (x_1, x_2, ..., x_n) and x[v] is the tuple consisting of the states associated to the vertices in the 1-neighborhood of v in Y (in some fixed order); a vertex function f_v for each vertex v, which maps the state of vertex v at time t to the vertex state at time t + 1 based on the states associated to the 1-neighborhood of v in Y; and a word w = (w_1, w_2, ..., w_m) over v[Y], which specifies the update order.
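A minimal Python sketch of an SDS, using Boolean NOR vertex functions on a 4-cycle; the particular graph, functions, and word are illustrative choices, not part of the general definition.

```python
# Sequential dynamical system: states updated one vertex at a time,
# in the order given by the word w, each from its 1-neighborhood.
graph = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # the 4-cycle

def nor(values):
    return int(not any(values))

def sds_step(state, word):
    state = list(state)
    for v in word:  # sequential (asynchronous) updates
        neighborhood = [state[v]] + [state[u] for u in graph[v]]
        state[v] = nor(neighborhood)
    return tuple(state)

x = (0, 0, 0, 0)
print(sds_step(x, word=(0, 1, 2, 3)))  # (1, 0, 1, 0)
```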
                                                                                                                                                                                                                                                                                                                                                                                                    1. A Dynamic Bayesian Network (DBN) is a Bayesian network (BN) which relates variables to each other over adjacent time steps. This is often called a Two-Timeslice BN (2TBN) because it says that at any point in time T, the value of a variable can be calculated from the internal regressors and the immediate prior value (time T-1)
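A small sketch of the 2TBN idea, reduced to a single two-state variable whose value at time T is sampled given its value at time T − 1; a full DBN would factor this over several variables per time slice.

```python
# P(X_T | X_{T-1}) for a two-state variable; rows index the prior state.
import random

transition = {0: [0.9, 0.1],
              1: [0.3, 0.7]}

def simulate(x0, steps, seed=0):
    random.seed(seed)
    xs = [x0]
    for _ in range(steps):
        p1 = transition[xs[-1]][1]   # probability of moving to state 1
        xs.append(1 if random.random() < p1 else 0)
    return xs

print(simulate(0, 10))
```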
                                                                                                                                                                                                                                                                                                                                                                                        2. A spatial network (sometimes also geometric graph) is a graph in which the vertices or edges are spatial elements associated with geometric objects, i.e. the nodes are located in a space equipped with a certain metric.[1][2] The simplest mathematical realization is a lattice or a random geometric graph, where nodes are distributed uniformly at random over a two-dimensional plane; a pair of nodes are connected if the Euclidean distance is smaller than a given neighborhood radius.
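A minimal sketch of that simplest realization, the random geometric graph, in plain Python:

```python
# Uniform points in the unit square, connected when the Euclidean
# distance falls below a given neighborhood radius.
import random, math

def random_geometric_graph(n, radius, seed=1):
    random.seed(seed)
    pts = [(random.random(), random.random()) for _ in range(n)]
    edges = [(i, j) for i in range(n) for j in range(i)
             if math.dist(pts[i], pts[j]) < radius]
    return pts, edges

pts, edges = random_geometric_graph(100, 0.15)
print(len(edges))
```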
1. Lattice networks are useful models for spatially embedded networks. Many physical phenomena have been studied on these structures; examples include the Ising model for spontaneous magnetization,[5] diffusion phenomena modeled as random walks,[6] and percolation.[7] Recently, to model the resilience of interdependent infrastructures which are spatially embedded, a model of interdependent lattice networks was introduced and analyzed.
                                                                                                                                                                                                                                                                                                                                                                                            1. Network dynamics is a research field for the study of networks whose status changes in time. The dynamics may refer to the structure of connections of the units of a network,[1][2] to the collective internal state of the network,[3] or both. The networked systems could be from the fields of biology, chemistry, physics, sociology, economics, computer science, etc. Networked systems are typically characterized as complex systems consisting of many units coupled by specific, potentially changing, interaction topologies.
                                                                                                                                                                                                                                                                                                                                                                                              1. 4 categories of connectedness: Clique/Complete Graph: a completely connected network, where all nodes are connected to every other node. These networks are symmetric in that all nodes have in-links and out-links from all others. Giant Component: A single connected component which contains most of the nodes in the network. Weakly Connected Component: A collection of nodes in which there exists a path from any node to any other, ignoring directionality of the edges. Strongly Connected Component: A collection of nodes in which there exists a directed path from any node to any other.
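These categories can be read off a small directed graph with networkx (assumed available):

```python
import networkx as nx

D = nx.DiGraph([(1, 2), (2, 3), (3, 1), (3, 4)])  # a cycle plus a tail

print(list(nx.weakly_connected_components(D)))    # ignores edge direction
print(list(nx.strongly_connected_components(D)))  # {1,2,3} mutually reachable; {4} alone

G = D.to_undirected()
print(max(nx.connected_components(G), key=len))   # the giant component
```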
1. Centrality indices produce rankings which seek to identify the most important nodes in a network model. Different centrality indices encode different contexts for the word "importance." The betweenness centrality, for example, considers a node highly important if it forms bridges between many other nodes. The eigenvector centrality, in contrast, considers a node highly important if many other highly important nodes link to it. Hundreds of such measures have been proposed in the literature. Centrality indices are only accurate for identifying the most central nodes. The measures are seldom, if ever, meaningful for the remainder of network nodes.[5][6] Also, their indications are only accurate within their assumed context for importance, and tend to "get it wrong" for other contexts.[7]
1. Limitations to centrality measures have led to the development of more general measures. Two examples are the accessibility, which uses the diversity of random walks to measure how accessible the rest of the network is from a given start node,[14] and the expected force, derived from the expected value of the force of infection generated by a node.[5] Both of these measures can be meaningfully computed from the structure of the network alone.
1. probabilities between models, i.e. Barabási–Albert and Watts–Strogatz, mediation-driven attachment, fitness model, percolation
                                                                                                                                                                                                                                                                                                                                                                                                      1. In mathematics, higher category theory is the part of category theory at a higher order, which means that some equalities are replaced by explicit arrows in order to be able to explicitly study the structure behind those equalities.
2. Degree centrality of a node in a network is the number of links (edges) incident on the node. Closeness centrality determines how "close" a node is to other nodes in a network by measuring the sum of the shortest distances (geodesic paths) between that node and all other nodes in the network. Betweenness centrality determines the relative importance of a node by measuring the amount of traffic flowing through that node to other nodes in the network. This is done by measuring the fraction of paths connecting all pairs of nodes and containing the node of interest. Group betweenness centrality measures the amount of traffic flowing through a group of nodes.[39] Eigenvector centrality is a more sophisticated version of degree centrality where the centrality of a node depends not only on the number of links incident on the node but also the quality of those links. This quality factor is determined by the eigenvectors of the adjacency matrix of the network. Katz centrality of a node is measured by summing the geodesic paths between that node and all reachable nodes in the network.
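All of the indices named above have standard implementations in networkx (assumed available); a quick sketch on a benchmark graph:

```python
# Each call returns a node -> score mapping; node 0 is printed as a sample.
import networkx as nx

G = nx.karate_club_graph()

print(nx.degree_centrality(G)[0])
print(nx.closeness_centrality(G)[0])
print(nx.betweenness_centrality(G)[0])
print(nx.eigenvector_centrality(G)[0])
print(nx.katz_centrality(G, alpha=0.05)[0])  # alpha below 1/lambda_max to converge
```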
2. Core–periphery structure is a network theory model based on Immanuel Wallerstein's world-systems theory, which he formulated in the 1980s. There are two main intuitions behind the definition of core–periphery network structures: one assumes that a network can have only one core, whereas the other allows for the possibility of multiple cores. These two intuitive conceptions serve as the basis for two models of core–periphery structures.
1. continuous model: This model allows for the existence of three or more partitions of node classes. However, including more classes makes modifications to the discrete model more difficult. Borgatti & Everett (1999) suggest that, in order to overcome this problem, each node be assigned a measure of "coreness" that will determine its class. Nevertheless, the threshold of what constitutes a high "coreness" value must be justified theoretically.
1. discrete model: This model assumes that there are two classes of nodes. The first consists of a cohesive core sub-graph in which the nodes are highly interconnected, and the second is made up of a peripheral set of nodes that is loosely connected to the core. In an ideal core–periphery matrix, core nodes are adjacent to other core nodes and to some peripheral nodes, while peripheral nodes are not connected with other peripheral nodes (Borgatti & Everett, 2000, p. 378). This requires, however, that there be an a priori partition that indicates whether a node belongs to the core or periphery.
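A rough numpy sketch of the discrete intuition: score a candidate partition by how well the observed adjacency matrix matches the ideal core–periphery image. Borgatti & Everett use a correlation; simple agreement is used here to keep the sketch short.

```python
import numpy as np

def pattern_match(adj, core):
    """Fraction of off-diagonal entries agreeing with the ideal image
    (core-core and core-periphery ties present, periphery-periphery absent)."""
    n = len(adj)
    ideal = np.array([[1 if (core[i] or core[j]) else 0
                       for j in range(n)] for i in range(n)])
    mask = ~np.eye(n, dtype=bool)
    return (adj[mask] == ideal[mask]).mean()

adj = np.array([[0, 1, 1, 1],
                [1, 0, 1, 0],
                [1, 1, 0, 0],
                [1, 0, 0, 0]])
print(pattern_match(adj, core=[True, True, False, False]))  # ~0.83
```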
                                                                                                                                                                                                                                                                                                                                                                                                      2. degrees of plurality?
3. planar networks, the nature of being embedded in space, Voronoi tessellation regions, mixing space and topology
2. A Bayesian network (also Bayes network, belief network, decision network, Bayes(ian) model, or probabilistic directed acyclic graphical model) is a probabilistic graphical model (a type of statistical model) that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG). Bayesian networks are ideal for taking an event that occurred and predicting the likelihood that any one of several possible known causes was the contributing factor. Similar ideas may be applied to undirected, and possibly cyclic, graphs such as Markov networks.
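A self-contained sketch of that cause-from-effect reasoning on the classic rain/sprinkler/wet-grass network, using brute-force enumeration over the DAG's conditional probability tables:

```python
from itertools import product

p_rain = {True: 0.2, False: 0.8}
p_sprinkler = {True: {True: 0.01, False: 0.99},    # P(S | R)
               False: {True: 0.4, False: 0.6}}
p_wet = {(True, True): 0.99, (True, False): 0.9,   # P(W=1 | S, R)
         (False, True): 0.8, (False, False): 0.0}

def joint(r, s, w):
    pw = p_wet[(s, r)]
    return p_rain[r] * p_sprinkler[r][s] * (pw if w else 1 - pw)

# P(Rain | GrassWet) by summing the joint over the hidden variable S
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(num / den)  # ~0.36
```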
                                                                                                                                                                                                                                                                                                                                                                                          1. https://en.wikipedia.org/wiki/Issue-based_information_system
                                                                                                                                                                                                                                                                                                                                                                                            1. IBIS notation is used in issue mapping,[2]:ix an argument visualization technique closely related to argument mapping.[6] An issue map aims to comprehensively diagram the rhetorical structure of a conversation (or a series of conversations) as seen by the participants in the conversation, as opposed to an ideal conceptual structure such as, for example, a causal loop diagram, flowchart, or structure chart.[2]:264
1. Issue mapping is the basis of a meeting facilitation technique called dialogue mapping.[14][15] In dialogue mapping, a person called a facilitator uses IBIS notation to record a group conversation, while it is happening, on a "shared display" (usually a video projector). The facilitator listens to the conversation, summarizes the ideas mentioned in the conversation on the shared display using IBIS notation, and if possible "validates" the map, often by checking with the group to make sure each recorded element accurately represents the group's thinking.[15] Dialogue mapping, like a few other facilitation methods, has been called "nondirective" because it does not require participants or leaders to agree on an agenda or a problem definition.[16] Users of dialogue mapping have reported that, under certain conditions, it can improve the efficiency of meetings by reducing unnecessary redundancy and digressions in conversations, among other benefits.[15][17]
3. Provided that they have the same size (each matrix has the same number of rows and the same number of columns as the other), two matrices can be added or subtracted element by element (see conformable matrix). The rule for matrix multiplication, however, is that two matrices can be multiplied only when the number of columns in the first equals the number of rows in the second (i.e., the inner dimensions are the same: an (m×n)-matrix times an (n×p)-matrix yields an (m×p)-matrix). There is no product the other way round, a first hint that matrix multiplication is not commutative. Any matrix can be multiplied element-wise by a scalar from its associated field. The individual items in an m×n matrix A, often denoted by a_i,j, where i and j usually vary from 1 to m and n, respectively, are called its elements or entries.[4] For conveniently expressing an element of the result of matrix operations, the indices of the element are often attached to the parenthesized or bracketed matrix expression; e.g., (AB)_i,j refers to an element of a matrix product. In the context of abstract index notation this ambiguously refers also to the whole matrix product.
                                                                                                                                                                                                                                                                                                                                                                                          1. A major application of matrices is to represent linear transformations, that is, generalizations of linear functions such as f(x) = 4x. For example, the rotation of vectors in three-dimensional space is a linear transformation, which can be represented by a rotation matrix R: if v is a column vector (a matrix with only one column) describing the position of a point in space, the product Rv is a column vector describing the position of that point after a rotation. The product of two transformation matrices is a matrix that represents the composition of two transformations. Another application of matrices is in the solution of systems of linear equations. If the matrix is square, it is possible to deduce some of its properties by computing its determinant.
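A numpy sketch of these facts, using a 2D rotation for brevity in place of the 3D rotation mentioned above:

```python
# Elementwise addition, non-commutative multiplication, and a rotation
# matrix acting on a column vector.
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
print(A + B)                          # same-size matrices add element by element
print(A @ B)                          # (2x2)(2x2) -> 2x2
print(np.array_equal(A @ B, B @ A))   # False: multiplication is not commutative

theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # 2D rotation matrix
v = np.array([[1.0], [0.0]])          # column vector
print(R @ v)                          # rotated 90 degrees: approximately (0, 1)
print(np.linalg.det(R))               # determinant 1: rotations preserve area
```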
                                                                                                                                                                                                                                                                                                                                                                                            1. The adjacency matrix of a finite graph is a basic notion of graph theory.[80] It records which vertices of the graph are connected by an edge. Matrices containing just two different values (1 and 0 meaning for example "yes" and "no", respectively) are called logical matrices. The distance (or cost) matrix contains information about distances of the edges.[81] These concepts can be applied to websites connected by hyperlinks or cities connected by roads etc., in which case (unless the connection network is extremely dense) the matrices tend to be sparse, that is, contain few nonzero entries. Therefore, specifically tailored matrix algorithms can be used in network theory.
                                                                                                                                                                                                                                                                                                                                                                                            2. has basic operations for transformations
2. The model describes varying levels of abstraction (up) and concreteness (down) and helps describe our language and thoughts. The higher up the ladder you are, the more abstract the idea, language, or thought is; the lower you are on the ladder, the more concrete it is. You can also think about the ladder as scaling out (abstracting) and scaling back in (concrete). I often use the language: Let's zoom out for a second. Why is this connected to other projects?

Notes:

                                                                                                                                                                                                                                                                                                                                                                                          • https://medium.com/@tombarrett/up-and-down-the-ladder-of-abstraction-cb73533be751
1. modeling concurrent systems
1. In computer science, the process calculi (or process algebras) are a diverse family of related approaches for formally modelling concurrent systems. Process calculi provide a tool for the high-level description of interactions, communications, and synchronizations between a collection of independent agents or processes. They also provide algebraic laws that allow process descriptions to be manipulated and analyzed, and permit formal reasoning about equivalences between processes (e.g., using bisimulation).
1. Key features: representing interactions between independent processes as communication (message-passing), rather than as modification of shared variables; describing processes and systems using a small collection of primitives, and operators for combining those primitives; defining algebraic laws for the process operators, which allow process expressions to be manipulated using equational reasoning.
1. To define a process calculus, one starts with a set of names (or channels) whose purpose is to provide means of communication. In many implementations, channels have rich internal structure to improve efficiency, but this is abstracted away in most theoretic models. In addition to names, one needs a means to form new processes from old ones. The basic operators, always present in some form or other, allow:[3] parallel composition of processes; specification of which channels to use for sending and receiving data; sequentialization of interactions; hiding of interaction points; and recursion or process replication. (A message-passing sketch follows after the notes below.)
1. temporally order, reduce according to the tile on the page

Notes:

                                                                                                                                                                                                                                                                                                                                                                                                    • https://en.wikipedia.org/wiki/Process_calculus
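A loose Python sketch of the message-passing intuition behind process calculi (not any particular calculus): two processes composed in parallel, interacting only over a named channel, here played by a thread-safe queue rather than shared variables.

```python
import threading, queue

channel = queue.Queue()  # the "name" both processes communicate on

def producer():
    for item in ("a", "b", "c"):
        channel.put(item)      # send on the channel
    channel.put(None)          # end-of-stream marker

def consumer():
    while (item := channel.get()) is not None:  # receive on the channel
        print("received", item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)   # parallel composition
t1.start(); t2.start()
t1.join(); t2.join()
```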
                                                                                                                                                                                                                                                                                                                                                                                              2. more statistical graphs
                                                                                                                                                                                                                                                                                                                                                                                                1. https://visme.co/blog/types-of-graphs/
                                                                                                                                                                                                                                                                                                                                                                                          2. formalizes mathematical structure and its concepts in terms of a labeled directed graph called a category, whose nodes are called objects, and whose labelled directed edges are called arrows (or morphisms). A category has two basic properties: the ability to compose the arrows associatively, and the existence of an identity arrow for each object. The language of category theory has been used to formalize concepts of other high-level abstractions such as sets, rings, and groups. Informally, category theory is a general theory of functions.
                                                                                                                                                                                                                                                                                                                                                                                            1. A basic example of a category is the category of sets, where the objects are sets and the arrows are functions from one set to another. However, the objects of a category need not be sets, and the arrows need not be functions. Any way of formalising a mathematical concept such that it meets the basic conditions on the behaviour of objects and arrows is a valid category—and all the results of category theory apply to it. The "arrows" of category theory are often said to represent a process connecting two objects, or in many cases a "structure-preserving" transformation connecting two objects. There are, however, many applications where much more abstract concepts are represented by objects and morphisms. The most important property of the arrows is that they can be "composed", in other words, arranged in a sequence to form a new arrow.
                                                                                                                                                                                                                                                                                                                                                                                              1. A category is itself a type of mathematical structure, so we can look for "processes" which preserve this structure in some sense; such a process is called a functor.
1. Diagram chasing is a visual method of arguing with abstract "arrows" joined in diagrams. Functors are represented by arrows between categories, subject to specific defining commutativity conditions. Functors can define (construct) categorical diagrams and sequences (viz. Mitchell, 1965). A functor associates to every object of one category an object of another category, and to every morphism in the first category a morphism in the second.
1. A category C consists of the following mathematical entities: a class of objects; a class of morphisms (arrows) between objects; and a binary composition operation on morphisms that is associative and supplies an identity morphism for each object.
1. morphisms: relations among them are depicted with commutative diagrams, with "points" (corners) representing objects and "arrows" representing morphisms.
1. A morphism f : a → b is a monomorphism (or monic) if f ∘ g1 = f ∘ g2 implies g1 = g2 for all morphisms g1, g2 : x → a; and an epimorphism (or epic) if g1 ∘ f = g2 ∘ f implies g1 = g2 for all morphisms g1, g2 : b → x.
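The category of sets makes the composition and identity laws easy to spot-check in Python, with functions as the arrows (a pointwise check on sample inputs, not a proof):

```python
f = lambda x: x + 1        # arrow: int -> int
g = lambda x: 2 * x        # arrow: int -> int

def compose(g, f):
    return lambda x: g(f(x))

identity = lambda x: x

h = compose(g, f)                           # g after f
print(h(3))                                 # 2 * (3 + 1) = 8
print(compose(f, identity)(3) == f(3))      # identity law
lhs = compose(compose(g, f), identity)
rhs = compose(g, compose(f, identity))
print(lhs(3) == rhs(3))                     # associativity, checked pointwise
```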
3. a conceptual model of the domain that incorporates both behaviour and data.[1][2] In ontology engineering, a domain model is a formal representation of a knowledge domain with concepts, roles, datatypes, individuals, and rules, typically grounded in a description logic.
                                                                                                                                                                                                                                                                                                                                                                                            1. The model can then be used to solve problems related to that domain. The domain model is a representation of meaningful real-world concepts pertinent to the domain that need to be modeled in software.
4. In Systems Thinking, a foundational concept is called Circles of Causality. Defined simply, it is the idea that every event or happening in a system is both a cause and an effect: to every action there is a reaction, to which there is a further reaction. The understanding that every action causes a reaction forms the basis of Systems Thinking. The important thing to recognize, though, is that the reactions to an action are not always immediately obvious; they can be subtle. Over time these subtle reactions can cause very obvious negative or positive results. Circles of Causality are typically described in terms of two types of feedback, reinforcing and balancing feedback. These two types of feedback operate within a system and can be used to understand its current state.
                                                                                                                                                                                                                                                                                                                                                                                          5. Ishikawa diagrams (also called fishbone diagrams, herringbone diagrams, cause-and-effect diagrams, or Fishikawa) are causal diagrams created by Kaoru Ishikawa that show the potential causes of a specific event.[1] Common uses of the Ishikawa diagram are product design and quality defect prevention to identify potential factors causing an overall effect. Each cause or reason for imperfection is a source of variation. Causes are usually grouped into major categories to identify and classify these sources of variation.
1. Advantages: a highly visual brainstorming tool which can spark further examples of root causes; quickly identifies whether the root cause is found multiple times in the same or different causal tree; allows one to see all causes simultaneously; good visualization for presenting issues to stakeholders. Disadvantages: complex defects might yield many causes which can become visually cluttering; interrelationships between causes are not easily identifiable.[6]
                                                                                                                                                                                                                                                                                                                                                                                              1. Root-cause analysis is intended to reveal key relationships among various variables, and the possible causes provide additional insight into process behavior. The causes emerge by analysis, often through brainstorming sessions, and are grouped into categories on the main branches off the fishbone. To help structure the approach, the categories are often selected from one of the common models shown below, but may emerge as something unique to the application in a specific case.
6. The interaction overview diagram is one of the fourteen types of diagrams of the Unified Modeling Language (UML), which can picture a control flow with nodes that can contain interaction diagrams.[1] The interaction overview diagram is similar to the activity diagram, in that both visualize a sequence of activities. The difference is that, for an interaction overview, each individual activity is pictured as a frame which can contain a nested interaction diagram. This makes the interaction overview diagram useful to "deconstruct a complex scenario that would otherwise require multiple if-then-else paths to be illustrated as a single sequence diagram".[2] The other notation elements for interaction overview diagrams are the same as for activity diagrams. These include initial, final, decision, merge, fork and join nodes. The two new elements in the interaction overview diagrams are the "interaction occurrences" and "interaction elements".
                                                                                                                                                                                                                                                                                                                                                                                          7. Composite structure diagram in the Unified Modeling Language (UML) is a type of static structure diagram, that shows the internal structure of a class and the collaborations that this structure makes possible. This diagram can include internal parts, ports through which the parts interact with each other or through which instances of the class interact with the parts and with the outside world, and connectors between parts or ports. A composite structure is a set of interconnected elements that collaborate at runtime to achieve some purpose. Each element has some defined role in the collaboration.
3. While a use case itself might drill into a lot of detail about every possibility, a use-case diagram can help provide a higher-level view of the system. It has been said before that "Use case diagrams are the blueprints for your system".[1] They provide a simplified and graphical representation of what the system must actually do. Due to their simplistic nature, use case diagrams can be a good communication tool for stakeholders. The drawings attempt to mimic the real world and provide a view for the stakeholder to understand how the system is going to be designed. Siau and Lee conducted research to determine whether there was a valid situation for use case diagrams at all, or if they were unnecessary. What was found was that the use case diagrams conveyed the intent of the system in a more simplified manner to stakeholders and that they were "interpreted more completely than class diagrams".[2] The purpose of the use case diagrams is simply to provide the high-level view of the system and convey the requirements in laypeople's terms for the stakeholders.
                                                                                                                                                                                                                                                                                                                                                                                        4. A block diagram is a diagram of a system in which the principal parts or functions are represented by blocks connected by lines that show the relationships of the blocks.[1] They are heavily used in engineering in hardware design, electronic design, software design, and process flow diagrams. Block diagrams are typically used for higher level, less detailed descriptions that are intended to clarify overall concepts without concern for the details of implementation. Contrast this with the schematic diagrams and layout diagrams used in electrical engineering, which show the implementation details of electrical components and physical construction.
                                                                                                                                                                                                                                                                                                                                                                                        5. A state diagram is a type of diagram used in computer science and related fields to describe the behavior of systems. State diagrams require that the system described is composed of a finite number of states; sometimes, this is indeed the case, while at other times this is a reasonable abstraction. Many forms of state diagrams exist, which differ slightly and have different semantics.
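As a minimal illustration of what a state diagram encodes, here is a sketch in Python of a finite set of states with a transition table; the vending-machine states and events are hypothetical, chosen only for this example:

    # A finite state machine: the structure a state diagram depicts.
    # States and events are hypothetical examples.
    transitions = {
        ("idle", "insert_coin"): "ready",
        ("ready", "press_button"): "dispensing",
        ("dispensing", "done"): "idle",
    }

    def run(state, events):
        for event in events:
            # Stay in the current state on events with no defined transition.
            state = transitions.get((state, event), state)
        return state

    print(run("idle", ["insert_coin", "press_button", "done"]))  # -> idle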
                                                                                                                                                                                                                                                                                                                                                                                        6. A phase diagram in physical chemistry, engineering, mineralogy, and materials science is a type of chart used to show conditions (pressure, temperature, volume, etc.) at which thermodynamically distinct phases (such as solid, liquid or gaseous states) occur and coexist at equilibrium.
7. A data-flow diagram is a way of representing the flow of data through a process or a system (usually an information system). The DFD also provides information about the outputs and inputs of each entity and of the process itself. A data-flow diagram has no control flow: there are no decision rules and no loops. Specific operations based on the data can be represented by a flowchart.[1]
4. Interactive storytelling (also known as interactive drama) is a form of digital entertainment in which the storyline is not predetermined. The author creates the setting, characters, and situation which the narrative must address, but the user (also reader or player) experiences a unique story based on their interactions with the story world. The architecture of an interactive storytelling program includes a drama manager, user model, and agent model to control, respectively, aspects of narrative production, player uniqueness, and character knowledge and behavior.[1] Together, these systems generate characters that act "human," alter the world in real-time reactions to the player, and ensure that new narrative events unfold comprehensibly. (Note: combine with systems.)
                                                                                                                                                                                                                                                                                                                                                                                      5. Flow diagram is a collective term for a diagram representing a flow or set of dynamic relationships in a system. The term flow diagram is also used as a synonym for flowchart,[1] and sometimes as a counterpart of the flowchart.[2] Flow diagrams are used to structure and order a complex system, or to reveal the underlying structure of the elements and their interaction
                                                                                                                                                                                                                                                                                                                                                                                        1. types:
1. Alluvial diagram: highlights and summarizes the significant structural changes in networks
2. Control flow diagram: describes the control flow of a business process, process or program
3. Sankey diagram: line width represents the magnitude of the flow
4. Signal-flow graph: in mathematics, a graphical means of showing the relations among the variables of a set of linear algebraic relations
6. Comparison diagram or comparative diagram is a general type of diagram in which a comparison is made between two or more objects, phenomena or groups of data.[1] A comparison diagram can offer qualitative and/or quantitative information. This type of diagram is also called a comparison chart. The diagram itself is sometimes referred to as a cluster diagram.
1. Five basic types of comparison can be determined:[2] comparison of components (e.g. the pieces of a pie chart); item comparison (e.g. the bars in a bar chart); time-series comparison (e.g. the bars of a histogram or the curve of a line chart); frequency-distribution comparison (e.g. the distribution in a histogram or line chart); and correlation comparison (e.g. in a dot diagram).
7. In systems engineering, software engineering, and computer science, a function model or functional model is a structured representation of the functions (activities, actions, processes, operations) within the modeled system or subject area.[1] (Wikipedia's illustration is a function model of the process "Maintain Reparable Spares" in IDEF0 notation.) A function model, similar to the activity model or process model, is a graphical representation of an enterprise's function within a defined scope. The purposes of the function model are to describe the functions and processes, assist with discovery of information needs, help identify opportunities, and establish a basis for determining product and service costs.[2]
                                                                                                                                                                                                                                                                                                                                                                                        1. function model
                                                                                                                                                                                                                                                                                                                                                                                        2. Activity diagrams are graphical representations of workflows of stepwise activities and actions[1] with support for choice, iteration and concurrency. In the Unified Modeling Language, activity diagrams are intended to model both computational and organizational processes (i.e., workflows), as well as the data flows intersecting with the related activities.[2][3] Although activity diagrams primarily show the overall flow of control, they can also include elements showing the flow of data between activities through one or more data stores
                                                                                                                                                                                                                                                                                                                                                                                        3. ex of combining cladograms: https://academic.oup.com/sysbio/article-abstract/28/1/1/1655828
4. Ladder logic was originally a written method to document the design and construction of relay racks as used in manufacturing and process control.[1] Each device in the relay rack would be represented by a symbol on the ladder diagram, with connections between those devices shown. In addition, other items external to the relay rack, such as pumps, heaters, and so forth, would also be shown on the ladder diagram. Ladder logic has evolved into a programming language that represents a program by a graphical diagram based on the circuit diagrams of relay logic hardware. Ladder logic is used to develop software for programmable logic controllers.
8. A digital timing diagram is a representation of a set of signals in the time domain. A timing diagram can contain many rows, usually one of them being the clock. It is a tool commonly used in digital electronics, hardware debugging, and digital communications. Besides providing an overall description of the timing relationships, the digital timing diagram can help find and diagnose digital logic hazards. Most timing diagrams use the following conventions: a higher value is a logic one; a lower value is a logic zero; a slot showing both high and low is an either/or (such as on a data line); a Z indicates high impedance; a greyed-out slot is a don't-care or indeterminate.
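A rough plain-text sketch of those conventions, with hypothetical clk/en/data signals (Z marks high impedance, X an either/or slot):

    clk   _|~|_|~|_|~|_|~|_
    en    ____|~~~~~~~~|___
    data  ZZZZ=X==A==X=ZZZZ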
                                                                                                                                                                                                                                                                                                                                                                                      9. An existential graph is a type of diagrammatic or visual notation for logical expressions, proposed by Charles Sanders Peirce, who wrote on graphical logic as early as 1882,[2] and continued to develop the method until his death in 1914.
1. Alpha is isomorphic to sentential logic and the two-element Boolean algebra; beta is isomorphic to first-order logic with identity, with all formulas closed; gamma is (nearly) isomorphic to normal modal logic. Alpha nests in beta and gamma; beta does not nest in gamma, since quantified modal logic is more general than what Peirce put forth.
                                                                                                                                                                                                                                                                                                                                                                                          1. alpha syntax
                                                                                                                                                                                                                                                                                                                                                                                            1. semantics
                                                                                                                                                                                                                                                                                                                                                                                              1. The semantics are: The blank page denotes Truth; Letters, phrases, subgraphs, and entire graphs may be True or False; To enclose a subgraph with a cut is equivalent to logical negation or Boolean complementation. Hence an empty cut denotes False; All subgraphs within a given cut are tacitly conjoined.
1. Hence the alpha graphs are a minimalist notation for sentential logic, grounded in the expressive adequacy of And and Not. The alpha graphs constitute a radical simplification of the two-element Boolean algebra and the truth functors. The depth of an object is the number of cuts that enclose it. Rules of inference: Insertion – any subgraph may be inserted into an odd-numbered depth; Erasure – any subgraph in an even-numbered depth may be erased. Rules of equivalence: Double cut – a pair of cuts with nothing between them may be drawn around any subgraph, and likewise two nested cuts with nothing between them may be erased (this rule is equivalent to Boolean involution); Iteration/Deiteration – to understand this rule, it is best to view a graph as a tree structure having nodes and ancestors.
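A worked sketch of these rules, writing each cut as a pair of parentheses (a notational assumption, so "(p)" reads not-p and "(p (q))" reads p implies q): starting from the premises p and (p (q)) juxtaposed on the sheet, deiteration erases the inner copy of p, giving p ((q)); erasing the resulting double cut around q yields p q, so q has been derived, a graphical modus ponens.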
                                                                                                                                                                                                                                                                                                                                                                                                  1. Beta Peirce notated predicates using intuitive English phrases; the standard notation of contemporary logic, capital Latin letters, may also be employed. A dot asserts the existence of some individual in the domain of discourse. Multiple instances of the same object are linked by a line, called the "line of identity". There are no literal variables or quantifiers in the sense of first-order logic. A line of identity connecting two or more predicates can be read as asserting that the predicates share a common variable. The presence of lines of identity requires modifying the alpha rules of Equivalence.
                                                                                                                                                                                                                                                                                                                                                                                                    1. Gamma Add to the syntax of alpha a second kind of simple closed curve, written using a dashed rather than a solid line. Peirce proposed rules for this second style of cut, which can be read as the primitive unary operator of modal logic. Zeman (1964) was the first to note that straightforward emendations of the gamma graph rules yield the well-known modal logics S4 and S5. Hence the gamma graphs can be read as a peculiar form of normal modal logic.
                                                                                                                                                                                                                                                                                                                                                                                                2. The syntax is: The blank page; Single letters or phrases written anywhere on the page; Any graph may be enclosed by a simple closed curve called a cut or sep. A cut can be empty. Cuts can nest and concatenate at will, but must never intersect. Any well-formed part of a graph is a subgraph.
                                                                                                                                                                                                                                                                                                                                                                                                  1. An entitative graph is an element of the diagrammatic syntax for logic that Charles Sanders Peirce developed under the name of qualitative logic beginning in the 1880s, taking the coverage of the formalism only as far as the propositional or sentential aspects of logic are concerned.
                                                                                                                                                                                                                                                                                                                                                                                            2. Peirce developed much of the two-element Boolean algebra, propositional calculus, quantification and the predicate calculus, and some rudimentary set theory. Model theorists consider Peirce the first of their kind. He also extended De Morgan's relation algebra.
                                                                                                                                                                                                                                                                                                                                                                                  2. The theory of ologs is an attempt to provide a rigorous mathematical framework for knowledge representation, construction of scientific models and data storage using category theory, linguistic and graphical tools
1. Mathematical formalism: at the basic level an olog C is a category whose objects are represented as boxes containing sentences and whose morphisms are represented as directed labeled arrows between boxes. The structures of the sentences for both the objects and the morphisms of C need to be compatible with the mathematical definition of C. This compatibility cannot be checked mathematically, because it lies in the correspondence between mathematical ideas and natural language. Every olog has a target category, which is taken to be Set, the category of sets and functions, unless otherwise mentioned. In that case, we are looking at a set of amino acids, a set of amine groups, and a function that assigns to every amino acid its amine group. Here we usually stick to Set,
1. though sometimes using the Kleisli category C_P of the power set monad P. Another possibility, though one not used here, would be the Kleisli category of probability distributions (the Giry monad[2]), e.g. to obtain a generalization of Markov decision processes.
2. Spivak provides some rules of good practice for writing an olog whose morphisms have a functional nature (as in the amino-acid example above).[1] The text in a box should adhere to the following rules: begin with the word "a" or "an" (example: "an amino acid"); refer to a distinction made and recognizable by the olog's author; refer to a distinction for which there is a well-defined functor whose range is Set, i.e. an instance can be documented (example: there is a set of all amino acids); and declare all variables in a compound structure (example: instead of writing in a box "a man and a woman", write "a man m and a woman w", or "a pair (m, w) where m is a man and w is a woman"). The first three rules ensure that the objects (the boxes) defined by the olog's author are well-defined sets; the fourth rule improves the labeling of arrows.
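A minimal sketch in Python of the functional reading above, using the amino-acid example; the target category Set is modeled with plain sets and dicts, and all identifiers are hypothetical:

    # Boxes of an olog as sets, arrows as total functions (dicts).
    amino_acids = {"glycine", "alanine", "serine"}
    amine_groups = {"NH2-gly", "NH2-ala", "NH2-ser"}  # toy identifiers

    # Arrow "has": assigns to every amino acid its amine group.
    has_amine = {"glycine": "NH2-gly", "alanine": "NH2-ala", "serine": "NH2-ser"}

    def is_total_function(arrow, source, target):
        # In Set, an arrow must assign exactly one target element
        # to every element of the source box.
        return set(arrow) == source and set(arrow.values()) <= target

    def compose(g, f):
        # Composition g . f, read "f then g".
        return {x: g[f[x]] for x in f}

    assert is_total_function(has_amine, amino_acids, amine_groups)
    # Composing with the identity arrow changes nothing.
    identity = {x: x for x in amino_acids}
    assert compose(has_amine, identity) == has_amine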
1. In mathematics, an operad is concerned with prototypical algebras that model properties such as commutativity or anticommutativity as well as various amounts of associativity. Operads generalize the various associativity properties already observed in algebras and coalgebras such as Lie algebras or Poisson algebras by modeling computational trees within the algebra. Algebras are to operads as group representations are to groups. An operad can be seen as a set of operations, each one having a fixed finite number of inputs (arguments) and one output, which can be composed with one another. They form a category-theoretic analog of universal algebra.
2. Geovisualization or geovisualisation (short for geographic visualization) refers to a set of tools and techniques supporting the analysis of geospatial data through the use of interactive visualization. Like the related fields of scientific visualization[1] and information visualization,[2] geovisualization emphasizes knowledge construction over knowledge storage or information transmission.[1] To do this, geovisualization communicates geospatial information in ways that, when combined with human understanding, allow for data exploration and decision-making processes.[1][3][4] Traditional, static maps have a limited exploratory capability; the graphical representations are inextricably linked to the geographical information beneath. GIS and geovisualization allow for more interactive maps, including the ability to explore different layers of the map, to zoom in or out, and to change the visual appearance of the map, usually on a computer display.[5]
3. An entity–relationship model describes interrelated things of interest in a specific domain of knowledge. A basic ER model is composed of entity types (which classify the things of interest) and specifies relationships that can exist between entities (instances of those entity types).
                                                                                                                                                                                                                                                                                                                                                                                      1. conceptual, logical, physical (database)
1. conceptual: least granular detail, but establishes the overall scope of what is to be included within the model set
                                                                                                                                                                                                                                                                                                                                                                                        2. An entity may be defined as a thing capable of an independent existence that can be uniquely identified. An entity is an abstraction from the complexities of a domain. When we speak of an entity, we normally speak of some aspect of the real world that can be distinguished from other aspects of the real world.[4] An entity is a thing that exists either physically or logically. An entity may be a physical object such as a house or a car (they exist physically), an event such as a house sale or a car service, or a concept such as a customer transaction or order (they exist logically—as a concept)
                                                                                                                                                                                                                                                                                                                                                                                          1. A relationship captures how entities are related to one another. Relationships can be thought of as verbs, linking two or more nouns. Examples: an owns relationship between a company and a computer, a supervises relationship between an employee and a department, a performs relationship between an artist and a song, a proves relationship between a mathematician and a conjecture, etc.
1. Can map to natural-language grammar (verb, adverb, etc.); includes role naming, cardinalities, and roles (which can be named with verbs and phrases).
                                                                                                                                                                                                                                                                                                                                                                                              1. data structure diagram: documents the entities and their relationships, as well as the constraints that connect to them.
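A small Python sketch of what such a diagram documents: entities, relationships, cardinalities, and a constraint that relationship ends refer to declared entities. All names here are hypothetical:

    # Recording a tiny ER model as plain data, in the spirit of a
    # data structure diagram. Entity and relationship names are hypothetical.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Relationship:
        verb: str           # the linking verb, e.g. "owns"
        subject: str        # entity type playing the first role
        object: str         # entity type playing the second role
        cardinality: str    # e.g. "1:N"

    entities = {"Company", "Computer", "Employee", "Department"}
    relationships = [
        Relationship("owns", "Company", "Computer", "1:N"),
        Relationship("supervises", "Department", "Employee", "1:N"),
    ]

    for r in relationships:
        assert {r.subject, r.object} <= entities  # constraint: ends must exist
        print(f"{r.subject} --{r.verb} ({r.cardinality})--> {r.object}")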
                                                                                                                                                                                                                                                                                                                                                                                      2. A tree structure or tree diagram is a way of representing the hierarchical nature of a structure in a graphical form. It is named a "tree structure" because the classic representation resembles a tree, even though the chart is generally upside down compared to a biological tree, with the "stem" at the top and the "leaves" at the bottom.
                                                                                                                                                                                                                                                                                                                                                                                        1. types:
1. classical node-link diagrams, with line segments connecting nodes
2. nested sets, using enclosure/containment (cf. the Dyck language)
3. layered "icicle" diagrams, using alignment and adjacency
4. outlines and indented text
5. Newick notation: a way of representing graph-theoretical trees with edge lengths using parentheses and commas (see the sketch below)
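A sketch of Newick serialisation in Python; the (name, edge length, children) tuple encoding of the tree is an assumption of this sketch, not a standard library type:

    # Serialising a rooted tree with edge lengths to Newick notation.
    def to_newick(node):
        name, length, children = node
        label = f"{name}:{length}" if length is not None else name
        if not children:
            return label
        return "(" + ",".join(to_newick(c) for c in children) + ")" + label

    tree = ("root", None, [
        ("A", 1.0, []),
        ("X", 0.5, [("B", 2.0, []), ("C", 3.0, [])]),
    ])
    print(to_newick(tree) + ";")  # -> (A:1.0,(B:2.0,C:3.0)X:0.5)root;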
                                                                                                                                                                                                                                                                                                                                                                                    4. A sociogram is a graphic representation of social links that a person has. It is a graph drawing that plots the structure of interpersonal relations in a group situation.[1]
1. Sociograms were developed by Jacob L. Moreno to analyze choices or preferences within a group.[2][3] They can diagram the structure and patterns of group interactions. A sociogram can be drawn on the basis of many different criteria: social relations, channels of influence, lines of communication, etc. Points on a sociogram that receive many choices are called stars; those with few or no choices are called isolates. Individuals who choose each other have made a mutual choice; a one-way choice is one that is not reciprocated. Cliques are groups of three or more people within a larger group who all choose each other (mutual choice). Sociograms are the charts or tools used to find the sociometry of a social space.
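These notions are easy to read off a directed graph; a sketch using the networkx library, with hypothetical people and choices:

    # Reading "stars", "isolates" and mutual choices off a sociogram
    # modelled as a directed graph of who-chooses-whom.
    import networkx as nx

    choices = [("Ana", "Ben"), ("Cal", "Ben"), ("Dee", "Ben"),
               ("Ben", "Ana"), ("Eli", "Ana")]
    g = nx.DiGraph(choices)
    g.add_node("Flo")  # Flo makes and receives no choices

    stars = [p for p, d in g.in_degree() if d >= 3]        # many choices
    isolates = [p for p in g if g.degree(p) == 0]          # none at all
    mutual = [(a, b) for a, b in g.edges() if g.has_edge(b, a) and a < b]
    print(stars, isolates, mutual)  # ['Ben'] ['Flo'] [('Ana', 'Ben')]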
                                                                                                                                                                                                                                                                                                                                                                                    5. A radial tree, or radial map, is a method of displaying a tree structure (e.g., a tree data structure) in a way that expands outwards, radially
                                                                                                                                                                                                                                                                                                                                                                                      1. The overall distance "d" is the distance between levels of the graph. It is chosen so that the overall layout will fit within a screen. Layouts are generated by working outward from the center, root. The first level is a special case because all the nodes have the same parent. The nodes for level 1 can be distributed evenly, or weighted depending on the number of children they have. For subsequent levels, the children are positioned within sectors of the remaining space, so that child nodes of one parent do not overlap with others.
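A sketch in Python of the placement rule just described: each node sits at radius depth * d, and each child receives an equal sub-sector of its parent's angular sector, so siblings' subtrees never overlap. The dict-of-children tree encoding is an assumption of this sketch:

    import math

    def radial_layout(tree, node, depth=0, lo=0.0, hi=2 * math.pi, d=50.0, pos=None):
        pos = {} if pos is None else pos
        theta = (lo + hi) / 2  # place the node mid-sector
        pos[node] = (depth * d * math.cos(theta), depth * d * math.sin(theta))
        kids = tree.get(node, [])
        if kids:
            step = (hi - lo) / len(kids)  # each child gets an equal sub-sector
            for i, kid in enumerate(kids):
                radial_layout(tree, kid, depth + 1,
                              lo + i * step, lo + (i + 1) * step, d, pos)
        return pos

    tree = {"root": ["a", "b", "c"], "a": ["a1", "a2"]}
    print(radial_layout(tree, "root"))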
                                                                                                                                                                                                                                                                                                                                                                                    6. Layered graph drawing or hierarchical graph drawing is a type of graph drawing in which the vertices of a directed graph are drawn in horizontal rows or layers with the edges generally directed downwards
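One common way to assign the rows is longest-path layering, which places each vertex of a DAG one layer below its deepest predecessor; a sketch, assuming the input graph is an acyclic edge set:

    # Longest-path layer assignment for layered/hierarchical drawing.
    from functools import lru_cache

    edges = {("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")}
    preds = {}
    for u, v in edges:
        preds.setdefault(v, []).append(u)

    @lru_cache(maxsize=None)
    def layer(v):
        # Sources sit on layer 0; everything else one below its deepest parent.
        return 1 + max((layer(u) for u in preds.get(v, [])), default=-1)

    print({v: layer(v) for v in "abcd"})  # {'a': 0, 'b': 1, 'c': 1, 'd': 2}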
                                                                                                                                                                                                                                                                                                                                                                                    7. Formal concept analysis (FCA) is a principled way of deriving a concept hierarchy or formal ontology from a collection of objects and their properties. Each concept in the hierarchy represents the objects sharing some set of properties; and each sub-concept in the hierarchy represents a subset of the objects (as well as a superset of the properties) in the concepts above it. The term was introduced by Rudolf Wille in 1981, and builds on the mathematical theory of lattices and ordered sets that was developed by Garrett Birkhoff and others in the 1930s.

Notes:

                                                                                                                                                                                                                                                                                                                                                                                      • https://en.wikipedia.org/wiki/Formal_concept_analysis
                                                                                                                                                                                                                                                                                                                                                                                      1. The original motivation of formal concept analysis was the search for real-world meaning of mathematical order theory. One such possibility of very general nature is that data tables can be transformed into algebraic structures called complete lattices, and that these can be utilized for data visualization and interpretation. A data table that represents a heterogeneous relation between objects and attributes, tabulating pairs of the form "object g has attribute m", is considered as a basic data type. It is referred to as a formal context. In this theory, a formal concept is defined to be a pair (A, B), where A is a set of objects (called the extent) and B is a set of attributes (the intent) such that the extent A consists of all objects that share the attributes in B, and dually the intent B consists of all attributes shared by the objects in A. In this way, formal concept analysis formalizes the semantic notions of extension and intension.
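A brute-force sketch in Python of the two derivation operators and the resulting formal concepts, over a hypothetical toy context of objects and attributes:

    # Derivation operators of a formal context and enumeration of its
    # formal concepts (A, B) with A' = B and B' = A.
    from itertools import chain, combinations

    objects = {"duck", "dog", "carp"}
    attrs = {"swims", "barks", "has_legs"}
    incidence = {("duck", "swims"), ("duck", "has_legs"),
                 ("dog", "barks"), ("dog", "has_legs"),
                 ("carp", "swims")}

    def common_attrs(A):   # A': attributes shared by all objects in A
        return {m for m in attrs if all((g, m) in incidence for g in A)}

    def common_objs(B):    # B': objects having all attributes in B
        return {g for g in objects if all((g, m) in incidence for m in B)}

    concepts = []
    for A in map(set, chain.from_iterable(combinations(objects, r)
                                          for r in range(len(objects) + 1))):
        B = common_attrs(A)
        if common_objs(B) == A:       # (A, B) is a formal concept
            concepts.append((A, B))
    print(concepts)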
1. Formal concept analysis aims at the clarity of concepts according to Charles S. Peirce's pragmatic maxim by unfolding observable, elementary properties of the subsumed objects.[3] In his late philosophy, Peirce assumed that logical thinking aims at perceiving reality via the triad of concept, judgement and conclusion. Mathematics is an abstraction of logic, develops patterns of possible realities, and may therefore support rational communication. On this background, Wille defines: the aim and meaning of formal concept analysis as a mathematical theory of concepts and concept hierarchies is to support the rational communication of humans by mathematically developing appropriate conceptual structures which can be logically activated.
1. Sorting: order dimension (matrices), biclustering of matrices, knowledge spaces
                                                                                                                                                                                                                                                                                                                                                                                          2. Real-world data is often given in the form of an object-attribute table, where the attributes have "values". Formal concept analysis handles such data by transforming them into the basic type of a ("one-valued") formal context. The method is called conceptual scaling.
                                                                                                                                                                                                                                                                                                                                                                                            1. Formal concept analysis has elaborate mathematical foundations,[4] making the field versatile. As a basic example we mention the arrow relations, which are simple and easy to compute, but very useful.
1. Applications: triadic concept analysis replaces the binary incidence relation between objects and attributes by a ternary relation between objects, attributes, and conditions. An incidence (g, m, c) then expresses that the object g has the attribute m under the condition c. Although triadic concepts can be defined in analogy to the formal concepts above, the theory of the trilattices formed by them is much less developed than that of concept lattices, and seems to be difficult.[9] Voutsadakis has studied the n-ary case. This can also be applied to social data.
1. fuzzy concept analysis
                                                                                                                                                                                                                                                                                                                                                                                                  1. Concept algebras: Modelling negation of formal concepts
                                                                                                                                                                                                                                                                                                                                                                                    8. A decision tree is a decision support tool that uses a tree-like model of decisions and their possible consequences, including chance event outcomes, resource costs, and utility. It is one way to display an algorithm that only contains conditional control statements.
                                                                                                                                                                                                                                                                                                                                                                                      1. A decision tree is a flowchart-like structure in which each internal node represents a "test" on an attribute (e.g. whether a coin flip comes up heads or tails), each branch represents the outcome of the test, and each leaf node represents a class label (decision taken after computing all attributes). The paths from root to leaf represent classification rules. In decision analysis, a decision tree and the closely related influence diagram are used as a visual and analytical decision support tool, where the expected values (or expected utility) of competing alternatives are calculated.
1. A decision tree consists of three types of nodes:[1] decision nodes, typically represented by squares; chance nodes, typically represented by circles; and end nodes, typically represented by triangles. Decision trees also have associated rules and symbol languages.
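A sketch of "folding back" such a tree in Python: chance nodes take the probability-weighted average of their branches, decision nodes take the best branch. The tagged-tuple encoding and the payoffs are assumptions of this sketch:

    # Fold-back evaluation of a decision tree.
    def value(node):
        kind = node[0]
        if kind == "end":                      # leaf: payoff
            return node[1]
        if kind == "chance":                   # circle: expected value
            return sum(p * value(child) for p, child in node[1])
        if kind == "decision":                 # square: pick the best option
            return max(value(child) for _, child in node[1])

    tree = ("decision", [
        ("launch", ("chance", [(0.6, ("end", 100)), (0.4, ("end", -50))])),
        ("wait",   ("end", 10)),
    ])
    print(value(tree))  # max(0.6*100 + 0.4*(-50), 10) = 40.0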
                                                                                                                                                                                                                                                                                                                                                                                    9. https://en.wikipedia.org/wiki/List_of_concept-_and_mind-mapping_software
1. A concept map typically represents ideas and information as boxes or circles, which it connects with labeled arrows in a downward-branching hierarchical structure. The relationship between concepts can be articulated in linking phrases such as "causes", "requires", or "contributes to".
2. Cladistics (/kləˈdɪstɪks/, from Greek κλάδος, kládos, "branch")[1] is an approach to biological classification in which organisms are categorized in groups ("clades") based on the most recent common ancestor. Hypothesized relationships are typically based on shared derived characteristics (synapomorphies) that can be traced to the most recent common ancestor and are not present in more distant groups and ancestors. A key feature of a clade is that a common ancestor and all its descendants are part of the clade. Importantly, all descendants stay in their overarching ancestral clade. For example, if within a strict cladistic framework the terms animals, bilateria/worms, fishes/vertebrata, or monkeys/anthropoidea were used, these terms would include humans. Many of these terms are normally used paraphyletically, outside of cladistics, e.g. as a 'grade'. Radiation results in the generation of new subclades by bifurcation, but in practice sexual hybridization may blur very closely related groups.
1. The cladistic method interprets each character state transformation implied by the distribution of shared character states among taxa (or other terminals) as a potential piece of evidence for grouping. The outcome of a cladistic analysis is a cladogram, a tree-shaped diagram (dendrogram)[16] that is interpreted to represent the best hypothesis of phylogenetic relationships. Although traditionally such cladograms were generated largely on the basis of morphological characters and originally calculated by hand, genetic sequencing data and computational phylogenetics are now commonly used in phylogenetic analyses, and the parsimony criterion has been abandoned by many phylogeneticists in favor of more "sophisticated" but less parsimonious evolutionary models of character state transformation. Cladists contend that these models are unjustified. Every cladogram is based on a particular dataset analyzed with a particular method. Datasets are tables consisting of
                                                                                                                                                                                                                                                                                                                                                                                          1. molecular, morphological, ethological[17] and/or other characters and a list of operational taxonomic units (OTUs), which may be genes, individuals, populations, species, or larger taxa that are presumed to be monophyletic and therefore to form, all together, one large clade; phylogenetic analysis infers the branching pattern within that clade. Different datasets and different methods, not to mention violations of the mentioned assumptions, often result in different cladograms. Only scientific investigation can show which is more likely to be correct.
1. Terms coined by Hennig are used to identify shared or distinct character states among groups (e.g. plesiomorphy, synapomorphy, autapomorphy).
                                                                                                                                                                                                                                                                                                                                                                                    10. semantic network
                                                                                                                                                                                                                                                                                                                                                                                      1. semantic web
                                                                                                                                                                                                                                                                                                                                                                                      2. In informal logic and philosophy, an argument map or argument diagram is a visual representation of the structure of an argument. An argument map typically includes the key components of the argument, traditionally called the conclusion and the premises, also called contention and reasons.[1] Argument maps can also show co-premises, objections, counterarguments, rebuttals, and lemmas. There are different styles of argument map but they are often functionally equivalent and represent an argument's individual claims and the relationships between them.
                                                                                                                                                                                                                                                                                                                                                                                        1. Argument maps are useful not only for representing and analyzing existing writings, but also for thinking through issues as part of a problem-structuring process or writing process.[14] The use of such argument analysis for thinking through issues has been called "reflective argumentation".[15] An argument map, unlike a decision tree, does not tell how to make a decision, but the process of choosing a coherent position (or reflective equilibrium) based on the structure of an argument map can be represented as a decision tree.
                                                                                                                                                                                                                                                                                                                                                                                          1. Monroe Beardsley proposed a form of argument diagram in 1950.[12] His method of marking up an argument and representing its components with linked numbers became a standard and is still widely used. He also introduced terminology that is still current describing convergent, divergent and serial arguments.
                                                                                                                                                                                                                                                                                                                                                                                            1. Scriven advocated clarifying the meaning of the statements, listing them and then using a tree diagram with numbers to display the structure. Missing premises (unstated assumptions) were to be included and indicated with an alphabetical letter instead of a number to mark them off from the explicit statements. Scriven introduced counterarguments in his diagrams, which Toulmin had defined as rebuttal.[29] This also enabled the diagramming of "balance of consideration" arguments.[30]
2. Externalization: writing something down and reviewing what one has written often helps clarify one's thinking
                                                                                                                                                                                                                                                                                                                                                                                            1. Anticipating replies
3. Taxonomy types: alpha, Linnaean, rank-based, numerical/algorithmic, Flynn's. Arranged in hierarchies.
                                                                                                                                                                                                                                                                                                                                                                                3. see loa notes
                                                                                                                                                                                                                                                                                                                                                                                4. Kalman filters
1. Involves temporal measurements; see https://en.wikipedia.org/wiki/Time_series
1. Time series data have a natural temporal ordering. This makes time series analysis distinct from cross-sectional studies, in which there is no natural ordering of the observations (e.g. explaining people's wages by reference to their respective education levels, where the individuals' data could be entered in any order). Time series analysis is also distinct from spatial data analysis, where the observations typically relate to geographical locations (e.g. accounting for house prices by the location as well as the intrinsic characteristics of the houses). A stochastic model for a time series will generally reflect the fact that observations close together in time will be more closely related than observations further apart. In addition, time series models will often make use of the natural one-way ordering of time, so that values for a given period are expressed as deriving in some way from past values rather than from future values (see time reversibility).
                                                                                                                                                                                                                                                                                                                                                                                      1. linguistic process: https://www.researchgate.net/publication/220469148_Time-Series_Analysis_in_Linguistics_Application_of_the_ARIMA_Method_to_Cases_of_Spoken_Polish
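Since the parent note only names Kalman filters, here is a minimal sketch of a linear Kalman filter for a one-dimensional constant-velocity model; the transition/measurement matrices, noise covariances, and data are all assumed for illustration, not taken from any source above:

```python
import numpy as np

# State x = [position, velocity]; we observe noisy position only.
F = np.array([[1.0, 1.0],    # state transition (time step dt = 1)
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])   # measurement model: position only
Q = 1e-4 * np.eye(2)         # process-noise covariance (assumed)
R = np.array([[0.25]])       # measurement-noise covariance (assumed)

def kalman_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros((2, 1)), np.eye(2)   # initial estimate and covariance
rng = np.random.default_rng(0)
true_pos = np.cumsum(np.ones(50))    # object moving at unit speed
for z in true_pos + rng.normal(0, 0.5, 50):
    x, P = kalman_step(x, P, np.array([[z]]))
print("final estimate:", x.ravel())  # approximately [50, 1]
```

The filter fuses each noisy measurement with the model's prediction, weighting them by their respective uncertainties.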
2. Logical data models represent the abstract structure of a domain of information. They are often diagrammatic in nature and are most typically used in business processes that seek to capture things of importance to an organization and how they relate to one another. Once validated and approved, the logical data model can become the basis of a physical data model and form the design of a database. Logical data models should be based on the structures identified in a preceding conceptual data model, since this describes the semantics of the information context, which the logical model should also reflect. Even so, since the logical data model anticipates implementation on a specific computing system, the content of the logical data model is adjusted to achieve certain efficiencies. The term 'Logical Data Model' is sometimes used as a synonym of 'domain model' or as an alternative to the domain model. While the two concepts are closely related and have overlapping goals, a domain model focuses on capturing the concepts of the problem domain rather than the structure of the data associated with it.
                                                                                                                                                                                                                                                                                                                                                                                  3. The Product Space is a network that formalizes the idea of relatedness between products traded in the global economy.
4. What properties of visualizations increase people's performance when solving Bayesian reasoning tasks? In the discussion of the properties of two visualizations, the tree diagram and the unit square, we emphasize how both visualizations make relevant subset relations transparent. In fact, the unit square with natural frequencies reveals the subset relation that is essential for Bayes' rule in both a numerical and a geometrical way, whereas the tree diagram with natural frequencies does so only in a numerical way.

Notes:

                                                                                                                                                                                                                                                                                                                                                                                    • https://www.frontiersin.org/articles/10.3389/fpsyg.2016.02026/full
1. In this situation, the probability that a woman who was selected at random and who received a positive test result actually has the disease can be calculated according to Bayes' rule. The resulting posterior probability P(H|D), where H is the hypothesis (having the disease) and D is the data (testing positive), is:
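The formula itself seems to have been lost in the export; the standard form of Bayes' rule for this posterior is:

```latex
P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D \mid H)\,P(H) + P(D \mid \neg H)\,P(\neg H)}
```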
                                                                                                                                                                                                                                                                                                                                                                                      1. Although both the unit square and the tree diagram illustrate the nested-set structure of a Bayesian situation, they make nested-set structure transparent in different ways. Like Euler circles, the unit square shows the nested-set structure in the “Nested Style” (see introduction) by areas being included in other areas and therefore provides an image of sets being included in other sets. In Figure 2, we highlight different subset relations in the tree diagram and in the unit square. We arranged the subset relations in Figure 2 in the same order as they were addressed later in our test-items in Experiment 1 (in contrast, there was no highlighting in the test-items). On the right side of Figure 2, we show how subset relations are graphically made transparent in the unit square: we highlight subsets by areas marked gray and sets by areas framed by dotted lines.
1. In the unit square, subset relations can be grasped horizontally [e.g., subset relation (d) in Figure 2] as well as vertically [e.g., subset relation (a) in Figure 2]. The tree diagram, in contrast, represents the "Branch style" (see introduction) and visualizes the logical structure of subset relations by lines. The dotted arrows parallel to the branches in the tree diagrams (see Figure 2) highlight the logical relation between two sets when one set is the subset of another set. The tree implies a hierarchical structure, and therefore only those subset relations that are in line with the hierarchy are graphically salient.
1. Further, in our research the relevant numbers have to be grasped from the visualizations. For this reason, someone could argue that effects measured on performance in Bayesian reasoning tasks could simply be due to the effects of reading information. We therefore conducted a preliminary study to make sure that the unit square and the tree diagram were equally effective for extracting the relevant numbers; simple data are required for the numerator in Bayes' rule, and sums over two summands are required for the denominator. For both reading of simple data and summarizing over extracted data, we can refer to a preliminary study with 77 undergraduates in which the unit square and the tree diagram were found to be equally effective (Böcherer-Linder et al., 2015). This was an important result because in our further steps of research we can exclude any bias from the effects of reading numerical information.
                                                                                                                                                                                                                                                                                                                                                                                            1. To introduce the visualizations, we did not teach the participants how to read the visualizations, but we used the brief description shown in Figure 3. In both experiments, we used the same introductory example. Those participants who received the questionnaire with the unit square received the description of the unit square, those who received the questionnaire with the tree diagram received the description of the tree diagram. In the brief description, we first provided the statistical information in the form of a table that had similarities with the unit square (see Figure 3). However, in the preliminary study mentioned above where we used the same introductory example (Böcherer-Linder et al., 2015), participants’ ability to read out information from the visualizations did not differ. Thus, we concluded that this description was not an advantage in favor of the unit square.
1. As a consequence of the graphical transparency of the relevant subset relation, the unit square outperformed the tree diagram in all four Bayesian reasoning situations in Experiment 2. We interpret this result as based on the graphical properties, since both visualizations include the statistical information in the form of natural frequencies. The unit square therefore makes the nested-set structure of the problem transparent to a greater extent.
1. In our study we focused on the property of visualizing the nested-set structure of a Bayesian reasoning situation. Our results show that the graphical visualization of nested sets impacts performance in Bayesian reasoning tasks. Finally, we showed that the unit square, representing a "Nested style", is an effective visualization of Bayesian reasoning situations and can be used as a flexible display for risk communication as well as for mathematics education.
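To make the natural-frequency reading described above concrete, here is a tiny worked computation; the counts are hypothetical and not taken from the cited study:

```python
# Hypothetical natural frequencies for the disease/test example above.
# Out of 1000 women: 10 have the disease, of whom 8 test positive;
# of the 990 without the disease, 99 also test positive.
disease_and_positive = 8          # numerator: a simple datum read off the display
no_disease_and_positive = 99
all_positive = disease_and_positive + no_disease_and_positive  # sum of two summands
posterior = disease_and_positive / all_positive
print(f"P(disease | positive) = {posterior:.3f}")  # about 0.075
```

This is exactly the "simple datum for the numerator, sum of two summands for the denominator" structure that the preliminary study checked both visualizations could support equally well.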
                                                                                                                                                                                                                                                                                                                                                                                    2. In information visualization and computing, treemapping is a method for displaying hierarchical data using nested figures, usually rectangles.
1. To create a treemap, one must define a tiling algorithm, that is, a way to divide a region into sub-regions of specified areas. Ideally, a treemap algorithm would create regions that satisfy the following criteria: (1) a small aspect ratio, ideally close to one (regions with a small aspect ratio, i.e., "fat" objects, are easier to perceive[2]); (2) preservation of some sense of the ordering in the input data; (3) change to reflect changes in the underlying data. Unfortunately, these properties have an inverse relationship: as the aspect ratio is optimized, the order of placement becomes less predictable; as the order becomes more stable, the aspect ratio is degraded.
1. Six algorithms with varying ordering, aspect ratio, stability, and depth; a minimal tiling sketch follows below.
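As a minimal illustration of a tiling algorithm, here is a sketch of the classic slice-and-dice scheme, which alternates horizontal and vertical cuts; the node structure and field names are assumed, and real treemap libraries use more sophisticated (e.g. squarified) tilings:

```python
# Minimal slice-and-dice treemap tiling (a sketch, not a production layout).
# Each node is a dict with "name", "size" (for internal nodes, the sum of
# the children's sizes), and optionally "children".

def slice_and_dice(node, x, y, w, h, horizontal=True, out=None):
    """Tile `node` into the rectangle (x, y, w, h), alternating cut direction."""
    if out is None:
        out = []
    out.append((node["name"], round(x, 3), round(y, 3), round(w, 3), round(h, 3)))
    children = node.get("children", [])
    if children:
        total = sum(c["size"] for c in children)
        offset = 0.0
        for c in children:
            frac = c["size"] / total
            if horizontal:   # cut the rectangle into vertical strips
                slice_and_dice(c, x + offset * w, y, w * frac, h, False, out)
            else:            # cut the rectangle into horizontal strips
                slice_and_dice(c, x, y + offset * h, w, h * frac, True, out)
            offset += frac
    return out

tree = {"name": "root", "size": 10, "children": [
    {"name": "a", "size": 6},
    {"name": "b", "size": 3},
    {"name": "c", "size": 1},
]}
for name, *rect in slice_and_dice(tree, 0, 0, 1, 1):
    print(name, rect)
```

Slice-and-dice preserves ordering and is stable under data changes, but tends to produce long thin rectangles, which is exactly the aspect-ratio trade-off described above.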
                                                                                                                                                                                                                                                                                                                                                                                  5. Parallel coordinates are a common way of visualizing high-dimensional geometry and analyzing multivariate data. To show a set of points in an n-dimensional space, a backdrop is drawn consisting of n parallel lines, typically vertical and equally spaced. A point in n-dimensional space is represented as a polyline with vertices on the parallel axes; the position of the vertex on the i-th axis corresponds to the i-th coordinate of the point. This visualization is closely related to time series visualization, except that it is applied to data where the axes do not correspond to points in time, and therefore do not have a natural order. Therefore, different axis arrangements may be of interest.
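As a quick illustration, pandas ships a parallel-coordinates plot; the four-column dataset below is made up, and the class column is used only to colour the polylines:

```python
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

# Tiny invented multivariate dataset: each row is a point in 4-D space.
df = pd.DataFrame({
    "class": ["a", "a", "b", "b"],
    "x1": [1.0, 1.2, 3.0, 2.8],
    "x2": [2.0, 2.1, 0.5, 0.7],
    "x3": [0.3, 0.4, 2.2, 2.0],
    "x4": [1.5, 1.4, 1.0, 1.1],
})
parallel_coordinates(df, "class")  # one polyline per row, one vertical axis per column
plt.show()
```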
                                                                                                                                                                                                                                                                                                                                                                                  6. Multidimensional scaling (MDS) is a means of visualizing the level of similarity of individual cases of a dataset. MDS is used to translate "information about the pairwise 'distances' among a set of n objects or individuals" into a configuration of n points mapped into an abstract Cartesian space.[1] More technically, MDS refers to a set of related ordination techniques used in information visualization, in particular to display the information contained in a distance matrix. It is a form of non-linear dimensionality reduction.
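A minimal sketch using scikit-learn's MDS on a precomputed distance matrix; the matrix values here are invented for illustration:

```python
import numpy as np
from sklearn.manifold import MDS

# Pairwise 'distances' among 4 objects (symmetric, zero diagonal).
D = np.array([[0.0, 1.0, 2.0, 3.0],
              [1.0, 0.0, 1.5, 2.5],
              [2.0, 1.5, 0.0, 1.0],
              [3.0, 2.5, 1.0, 0.0]])

# dissimilarity="precomputed" tells MDS to use D directly rather than
# computing distances from feature vectors.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)  # an n x 2 configuration of points
print(coords)
```

The output is a 2-D configuration whose pairwise Euclidean distances approximate the entries of D as closely as the embedding allows.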
7. Displaying hierarchical data as a tree suffers from visual clutter, as the number of nodes per level can grow exponentially. For a simple binary tree, the maximum number of nodes at level n is 2^n, and the number of nodes for larger trees grows much more quickly. Drawing the tree as a node-link diagram thus requires exponential amounts of space to be displayed.
                                                                                                                                                                                                                                                                                                                                                                                    1. A hyperbolic tree (often shortened as hypertree) is an information visualization and graph drawing method inspired by hyperbolic geometry
                                                                                                                                                                                                                                                                                                                                                                                      1. A basic hyperbolic tree. Nodes in focus are placed in the center and given more room, while out-of-focus nodes are compressed near the boundaries.
                                                                                                                                                                                                                                                                                                                                                                                  8. Graph drawing is an area of mathematics and computer science combining methods from geometric graph theory and information visualization to derive two-dimensional depictions of graphs arising from applications
                                                                                                                                                                                                                                                                                                                                                                                    1. A drawing of a graph or network diagram is a pictorial representation of the vertices and edges of a graph. This drawing should not be confused with the graph itself: very different layouts can correspond to the same graph.[2] In the abstract, all that matters is which pairs of vertices are connected by edges. In the concrete, however, the arrangement of these vertices and edges within a drawing affects its understandability, usability, fabrication cost, and aesthetics.[3] The problem gets worse if the graph changes over time by adding and deleting edges (dynamic graph drawing) and the goal is to preserve the user's mental map.[4]
                                                                                                                                                                                                                                                                                                                                                                                      1. The crossing number of a drawing is the number of pairs of edges that cross each other. If the graph is planar, then it is often convenient to draw it without any edge intersections; that is, in this case, a graph drawing represents a graph embedding. However, nonplanar graphs frequently arise in applications, so graph drawing algorithms must generally allow for edge crossings.[10] The area of a drawing is the size of its smallest bounding box, relative to the closest distance between any two vertices. Drawings with smaller area are generally preferable to those with larger area, because they allow the features of the drawing to be shown at greater size and therefore more legibly. The aspect ratio of the bounding box may also be important.
                                                                                                                                                                                                                                                                                                                                                                                        1. Symmetry display is the problem of finding symmetry groups within a given graph, and finding a drawing that displays as much of the symmetry as possible. Some layout methods automatically lead to symmetric drawings; alternatively, some drawing methods start by finding symmetries in the input graph and using them to construct a drawing.[11] It is important that edges have shapes that are as simple as possible, to make it easier for the eye to follow them. In polyline drawings, the complexity of an edge may be measured by its number of bends, and many methods aim to provide drawings with few total bends or few bends per edge. Similarly for spline curves the complexity of an edge may be measured by the number of control points on the edge.
                                                                                                                                                                                                                                                                                                                                                                                          1. Several commonly used quality measures concern lengths of edges: it is generally desirable to minimize the total length of the edges as well as the maximum length of any edge. Additionally, it may be preferable for the lengths of edges to be uniform rather than highly varied. Angular resolution is a measure of the sharpest angles in a graph drawing. If a graph has vertices with high degree then it necessarily will have small angular resolution, but the angular resolution can be bounded below by a function of the degree.[12] The slope number of a graph is the minimum number of distinct edge slopes needed in a drawing with straight line segment edges (allowing crossings). Cubic graphs have slope number at most four, but graphs of degree five may have unbounded slope number; it remains open whether the slope number of degree-4 graphs is bounded.
                                                                                                                                                                                                                                                                                                                                                                                        2. Types:
1. In force-based layout systems, the graph-drawing software modifies an initial vertex placement by continuously moving the vertices according to a system of forces based on physical metaphors related to systems of springs or molecular mechanics (a sketch comparing several layout families follows this list).
1. Spectral layout methods use as coordinates the eigenvectors of a matrix, such as the Laplacian derived from the adjacency matrix of the graph.[15]
1. Orthogonal layout methods allow the edges of the graph to run horizontally or vertically, parallel to the coordinate axes of the layout.
1. Tree layout algorithms show a rooted, tree-like formation, suitable for trees. Often, in a technique called "balloon layout", the children of each node in the tree are drawn on a circle surrounding the node, with the radii of these circles diminishing at lower levels in the tree so that these circles do not overlap.
                                                                                                                                                                                                                                                                                                                                                                                                  1. Layered graph drawing methods (often called Sugiyama-style drawing) are best suited for directed acyclic graphs or graphs that are nearly acyclic, such as the graphs of dependencies between modules or functions in a software system. In these methods, the nodes of the graph are arranged into horizontal layers using methods such as the Coffman–Graham algorithm, in such a way that most edges go downwards from one layer to the next; after this step, the nodes within each layer are arranged in order to minimize crossings
                                                                                                                                                                                                                                                                                                                                                                                                    1. Arc diagrams, a layout style dating back to the 1960s,[19] place vertices on a line; edges may be drawn as semicircles above or below the line, or as smooth curves linked together from multiple semicircles.
1. Circular layout methods place the vertices of the graph on a circle, choosing carefully the ordering of the vertices around the circle to reduce crossings and place adjacent vertices close to each other. Edges may be drawn either as chords of the circle or as arcs inside or outside of the circle. In some cases, multiple circles may be used.
                                                                                                                                                                                                                                                                                                                                                                                                        1. Dominance drawing places vertices in such a way that one vertex is upwards, rightwards, or both of another if and only if it is reachable from the other vertex. In this way, the layout style makes the reachability relation of the graph visually apparent.
                                                                                                                                                                                                                                                                                                                                                                                                          1. Hasse diagrams, a type of graph drawing specialized to partial orders
1. State diagrams, graphical representations of finite-state machines.
                                                                                                                                                                                                                                                                                                                                                                                                              1. Flowcharts and drakon-charts, drawings in which the nodes represent the steps of an algorithm and the edges represent control flow between steps.
                                                                                                                                                                                                                                                                                                                                                                                                                1. Data-flow diagrams, drawings in which the nodes represent the components of an information system and the edges represent the movement of information from one component to another.
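As referenced in the force-based item above, here is a small sketch comparing three of the layout families in this list using networkx; the choice of graph and parameters is arbitrary:

```python
import networkx as nx
import matplotlib.pyplot as plt

G = nx.petersen_graph()  # a small, well-known test graph

# Force-based (spring) layout: vertices repel, edges act as springs.
pos_spring = nx.spring_layout(G, seed=42)
# Spectral layout: coordinates from eigenvectors of the graph Laplacian.
pos_spectral = nx.spectral_layout(G)
# Circular layout: vertices on a circle, edges drawn as chords.
pos_circular = nx.circular_layout(G)

fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, (name, pos) in zip(axes, [("spring", pos_spring),
                                  ("spectral", pos_spectral),
                                  ("circular", pos_circular)]):
    nx.draw(G, pos, ax=ax, node_size=80)
    ax.set_title(name)
plt.show()
```

All three drawings represent the same abstract graph; only the vertex arrangement, and hence readability, crossings, and symmetry display, differs.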
                                                                                                                                                                                                                                                                                                                                                                                    2. A dendrogram is a diagram representing a tree. This diagrammatic representation is frequently used in different contexts: in hierarchical clustering, it illustrates the arrangement of the clusters produced by the corresponding analyses
                                                                                                                                                                                                                                                                                                                                                                                      1. In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:[1] Agglomerative: This is a "bottom-up" approach: each observation starts in its own cluster, and pairs of clusters are merged as one moves up the hierarchy. Divisive: This is a "top-down" approach: all observations start in one cluster, and splits are performed recursively as one moves down the hierarchy. In general, the merges and splits are determined in a greedy manner. The results of hierarchical clustering[2] are usually presented in a dendrogram.
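A minimal sketch of agglomerative ("bottom-up") clustering and its dendrogram using scipy, on synthetic two-group data:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(0)
# Two loose groups of 2-D points.
X = np.vstack([rng.normal(0, 0.3, (5, 2)),
               rng.normal(3, 0.3, (5, 2))])

# 'ward' greedily merges the pair of clusters that least increases
# total within-cluster variance, one merge per step.
Z = linkage(X, method="ward")
dendrogram(Z)  # the tree of merges, drawn as a dendrogram
plt.show()
```

The dendrogram's vertical axis encodes the merge distance, so cutting it at a chosen height yields a flat clustering.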
                                                                                                                                                                                                                                                                                                                                                                                    3. A concept map or conceptual diagram is a diagram that depicts suggested relationships between concepts.[1] It is a graphical tool that instructional designers, engineers, technical writers, and others use to organize and structure knowledge. A concept map typically represents ideas and information as boxes or circles, which it connects with labeled arrows in a downward-branching hierarchical structure. The relationship between concepts can be articulated in linking phrases such as "causes", "requires", or "contributes to"
1. A well-made concept map grows within a context frame defined by an explicit "focus question", while a mind map often has only branches radiating out from a central picture. Some research evidence suggests that the brain stores knowledge as productions (situation-response conditionals) that act on declarative memory content, also referred to as chunks or propositions.
1. Uses: communicating complex ideas and arguments; examining the symmetry of complex ideas and arguments and associated terminology; detailing the entire structure of an idea, train of thought, or line of argument (with the specific goal of exposing faults, errors, or gaps in one's own reasoning) for the scrutiny of others; enhancing metacognition (learning to learn, and thinking about knowledge); improving language ability.
4. A cladogram (from Greek clados "branch" and gramma "character") is a diagram used in cladistics to show relations among organisms. A cladogram is not, however, an evolutionary tree because it does not show how ancestors are related to descendants, nor does it show how much they have changed; nevertheless, many evolutionary trees can be inferred from a single cladogram.[1][2][3][4][5] A cladogram uses lines that branch off in different directions ending at a clade, a group of organisms with a last common ancestor. There are many shapes of cladograms but they all have lines that branch off from other lines. The lines can be traced back to where they branch off. These branching-off points represent a hypothetical ancestor (not an actual entity) which can be inferred to exhibit the traits shared among the terminal taxa above it.[4][6] This hypothetical ancestor might then provide clues about the order of evolution of various features, adaptation, and other evolutionary narratives about ancestors.
5. A cartogram is a map in which some thematic mapping variable – such as travel time, population, or GNP – is substituted for land area or distance. The geometry or space of the map is distorted, sometimes extremely, in order to convey the information of this alternate variable. They are primarily used to display emphasis.
                                                                                                                                                                                                                                                                                                                                                                                      1. relative sizes
1. A linear cartogram manipulates linear distance on a line feature. The spatial distortion allows the map reader to easily visualize intangible concepts such as travel time and connectivity on a network. Distance cartograms are also useful for comparing such concepts among different geographic features.
1. See the link below for map-distortion algorithms.

Notes:

                                                                                                                                                                                                                                                                                                                                                                                            • https://en.wikipedia.org/wiki/Cartogram
                                                                                                                                                                                                                                                                                                                                                                                    6. Ontology
                                                                                                                                                                                                                                                                                                                                                                                      1. These provisions, a common understanding of information and explicit domain assumptions, are valuable because ontologies support data integration for analytics, apply domain knowledge to data, support application interoperability, enable model driven applications, reduce time and cost of application development, and improve data quality by improving meta data and provenance.
1. An ontology encompasses a representation, formal naming, and definition of the categories, properties, and relations between the concepts, data, and entities that substantiate one, many, or all domains of discourse. More simply, an ontology is a way of showing the properties of a subject area and how they are related, by defining a set of concepts and categories that represent the subject.
2. Metaphysics of the concepts: one of the five traditional branches of philosophy, metaphysics is concerned with exploring existence through properties, entities, and relations, such as those between particulars and universals, intrinsic and extrinsic properties, or essence and existence.
                                                                                                                                                                                                                                                                                                                                                                                          1. Every academic discipline or field creates ontologies to limit complexity and organize data into information and knowledge. New ontologies improve problem solving within that domain. Translating research papers within every field is a problem made easier when experts from different countries maintain a controlled vocabulary of jargon between each of their languages
                                                                                                                                                                                                                                                                                                                                                                                          2. Individuals
1. Classes: sets, collections, concepts, classes in programming, types of objects
1. Attributes: aspects, properties, features, characteristics, or parameters that objects (and classes) can have
1. Relations
1. Function terms: complex structures formed from certain relations that can be used in place of an individual term in a statement
1. Restrictions: formally stated descriptions of what must be true in order for some assertion to be accepted as input
1. Rules: statements in the form of an if-then (antecedent-consequent) sentence that describe the logical inferences that can be drawn from an assertion in a particular form
1. Axioms: assertions (including rules) in a logical form that together comprise the overall theory that the ontology describes in its domain of application. This definition differs from that of "axioms" in generative grammar and formal logic. In those disciplines, axioms include only statements asserted as a priori knowledge. As used here, "axioms" also include the theory derived from axiomatic statements.
1. Events: the changing of attributes or relations. Ontologies are commonly encoded using ontology languages; a toy encoding of the components above is sketched below.
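As promised above, a toy encoding of some of these components (individuals, classes, relations, and a rule) as plain Python data; all names here are illustrative, and real ontologies would use a language such as OWL:

```python
# Toy ontology sketch: subject-predicate-object triples plus one rule.
classes = {"Person", "City"}                          # classes
individuals = {"alice": "Person", "paris": "City"}    # individual -> class
relations = [("alice", "livesIn", "paris")]           # relations (triples)

rules = [
    # Rule (antecedent-consequent): if X livesIn Y and Y is a City,
    # then infer that X isA urbanResident.
    lambda triples: [(s, "isA", "urbanResident")
                     for (s, p, o) in triples
                     if p == "livesIn" and individuals.get(o) == "City"],
]

inferred = [fact for rule in rules for fact in rule(relations)]
print(inferred)  # [('alice', 'isA', 'urbanResident')]
```

Attributes, restrictions, and axioms would be additional triples and checks in the same spirit; the point is only that each component in the list maps to a concrete data structure.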
2. • Complex systems are composed of many elements, components, or particles. These elements are typically described by their state, velocity, position, age, spin, colour, wealth, mass, shape, and so on. Elements may have stochastic components.
• Elements are not limited to physical forms of matter; anything that can interact and be described by states can be seen as generalized matter.
• Interactions between elements may be specific: who interacts with whom, when, and in what form is described by interaction networks.
• Interactions are not limited to the four fundamental forces, but can be of a complicated type. Generalized interactions are not limited to the exchange of gauge bosons, but can be mediated through the exchange of messages, objects, gifts, information, even bullets, and so on.
1. • Complex systems may involve superpositions of interactions of similar strengths.
• Complex systems are often chaotic in the sense that they depend strongly on the initial conditions and details of the system. Update equations that algorithmically describe the dynamics are often non-linear.
• Complex systems are often driven systems. Some systems obey conservation laws, some do not.
• Complex systems can exhibit a rich phase structure and have a huge variety of macrostates that often cannot be inferred from the properties of the elements. This is sometimes referred to as emergence. Simple forms of emergence are, of course, already present in physics: the spectrum of the hydrogen atom or the liquid phase of water are emergent properties of the atoms involved and their interactions.
1. Generalized interactions are described by the interaction type and by who interacts with whom, at what time, and at what strength. If more than two interacting elements are involved, interactions can be conveniently described by time-dependent networks, M^α_ij(t), where i and j label the elements in the system and α denotes the interaction type; the M^α_ij(t) are matrix elements of a structure with three indices. The value M^α_ij(t) indicates the strength of the interaction of type α between elements i and j at time t; M^α_ij(t) = 0 means no interaction of that type. Interactions in complex systems remain based on the concept of exchange.

Notes:

                                                                                                                                                                                                                                                                                                                                                                                  • https://polymer.bu.edu/hes/book-thurner18.pdf
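A direct, if naive, way to encode the M^α_ij(t) structure described above is a dense array indexed by interaction type, time, and the element pair; the sizes and values below are illustrative only:

```python
import numpy as np

# Time-dependent multiplex interaction network M[alpha, t, i, j],
# a direct encoding of the M^α_ij(t) structure described above.
n_types, n_steps, n_elements = 2, 3, 4
M = np.zeros((n_types, n_steps, n_elements, n_elements))

# Interaction of type 0 between elements 1 and 2 at time 0, strength 0.8.
M[0, 0, 1, 2] = 0.8
# Interaction of type 1 (e.g., information exchange) at a later time.
M[1, 2, 0, 3] = 1.5

# M[alpha, t, i, j] == 0 means: no interaction of type alpha
# between elements i and j at time t.
print(M[0, 0])  # snapshot of all type-0 interactions at t = 0
```

For large, sparse systems one would store only the non-zero entries (e.g. per-time edge lists), but the indexing scheme is the same.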
1. Because of more specific and time-varying interactions and the increased variety of types of interaction, the variety of macroscopic states and systemic properties increases drastically in complex systems. This increase in the diversity of macrostates and phenomena emerges from the properties both of the system's components and of its interactions. The phenomenon of collective properties arising that are, a priori, unexpected from the elements alone is sometimes called emergence. This is mainly a consequence of the presence of generalized interactions. Systems with time-varying generalized interactions can exhibit an extremely rich phase structure and may be adaptive. Phases may co-exist in particular complex systems. The plurality of macrostates in a system leads to new types of questions that can be addressed, such as: What is the number of macrostates? What are their co-occurrence rates? What are the typical sequences of macrostates?
1. • For evolutionary systems, boundary conditions cannot usually be fixed. This means that it is impossible to take the system apart and separate it from its context without massively altering and perhaps even destroying it. The concept of reductionism is inadequate for describing evolutionary processes.
• Evolutionary complex systems change their boundary conditions as they unfold in time; they co-evolve with their boundary conditions. Frequently, situations are difficult or impossible to solve analytically.
• For complex systems, the adjacent possible is a large set of possibilities; for physics, it is typically a very small set.
• The adjacent possible itself evolves.
• In many physical systems, the realization of the adjacent possible does not influence the next adjacent possible; in evolutionary systems, it does. In this sense, evolutionary systems cannot be described with physics alone.
                                                                                                                                                                                                                                                                                                                                                                                      1. Self-organized critical systems are dynamical, out-of-equilibrium systems that have a critical point as an attractor. These systems are characterized by (approximate) scale invariance or ‘scaling’. Scale invariance means the absence of characteristic scales, and it often manifests itself in the form of power laws in the associated probability distribution functions. Self-organized criticality is one of the classic ways of understanding the origin of power laws, which are omnipresent in complex systems. Other ways of understanding power laws include criticality, multiplicative processes with constraints, preferential dynamics, entropy methods, and sample space reducing processes
                                                                                                                                                                                                                                                                                                                                                                                        1. Emergence
1. Emergent properties are characteristic of complex systems. • Systems of sufficient complexity will typically have properties that can't be explained by breaking the system down into its elements. – Complex systems are self-organizing. • When a system becomes sufficiently complex, order will spontaneously appear. – Co-evolution: all systems exist within their own environment and they are also part of that environment. • As their environment changes, they need to change to ensure best fit. • But because they are part of their environment, when they change they change their environment, and as it has changed they need to change again, and so it goes on as a constant process.
1. Opposing views: Mark Bedau (1997) on strong emergence – "Although strong emergence is logically possible, it is uncomfortably like magic. How does an irreducible but supervenient downward causal power arise, since by definition it cannot be due to the aggregation of the micro-level potentialities? Such causal powers would be quite unlike anything within our scientific ken. This not only indicates how they will discomfort reasonable forms of materialism. Their mysteriousness will only heighten the traditional worry that emergence entails illegitimately getting something from nothing."
1. Philip Anderson noted (Anderson 1972): "The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. ... The constructionist hypothesis breaks down when confronted with the twin difficulties of scale and complexity. At each level of complexity entirely new properties appear. Psychology is not applied biology, nor is biology applied chemistry. We can now see that the whole becomes not merely more, but very different from the sum of its parts."
1. 'Emergent' refers to the state of being in continual process, never arriving, but always in transit. • 'Emergent' differs from 'emerging' because it treats the current state of being as a stage toward a possible outcome, always arising from its previous history and context. • Accepts that human systems are not deterministic; rather, they are products of constant social negotiation and consensus building. • Sees human systems as in the process of moving towards structure: they may exhibit temporal regularities of behavior, but they are never fixed or structured. • Accepts that there are emergent regularities but not unchanging relationships. • Holds that there are no points of theoretical stasis, only emergent regularities, and those regularities are always shifting and evolving.
1. Emergent systems: core ideas. • Sophisticated behavior can result from simple interactions of simple things. • One can often usefully and relatively rigorously distinguish two general classes of emergent systems: deterministic vs. non-deterministic. • There seem to be four rigorously distinguishable aspects of emergent systems: agents, environments, observers/participants, and creator/architect/designer. • Changes in behavior can occur due to changes in the environment without corresponding changes in an agent. • Both the agent(s) and the observer(s) can effect changes in the environment.
1. A bidirectional relationship between an unchanging agent and an environment modifiable by the agent can produce behaviors that an observer may see as "purposive", even in a deterministic system. • Behaviors that appear "purposive" to an observer do not depend on any representation of the "purpose" within the agent. • Systems that exhibit "purposive" behavior need not depend on any conception of that "purpose" in the mind of a creator/architect/designer. • Signs of "purpose", and even systems that exhibit what an observer would characterize as "purposive" behavior, can come into existence simply through indeterminate processes, i.e., need not involve minds at all. • That a world does things that are surprising to an observer does not establish whether it is deterministic or not.
1. EST views organizational/social behavior from a perspective that replaces fixed structures with continuous social re-construction. • Views process and 'becoming' as the default background, and structure or regularities as the anomaly. • Views organizational emergence as not simply organizational change.
1. The spontaneous emergence of large-scale spatial, temporal, or spatiotemporal order in a system of locally interacting, relatively simple components. • Self-organization is a bottom-up process where complex organization emerges at multiple levels from the interaction of lower-level entities. The final product is the result of nonlinear interactions rather than planning and design, and is not known a priori. • Contrast this with the standard, top-down engineering design paradigm, where planning precedes implementation and the desired final system is known by design.
1. Self-configuration: – An application is composed of a set of abstract entities (a set of services with certain relationships). – When started, an application collects certain components and assembles itself. – New components join dynamically: real 'plug-n-play'. • Self-optimization: – All components must be optimal. – The system as a whole must be optimal. – These two can conflict. – There can be conflicting interests: multi-criteria optimization.
1. Self-healing: – System components must be self-healing (reliable, dependable, robust, etc.). – The system as a whole must be self-healing (tolerate failing components, incorrect state, etc.). • Self-protection: – Protect oneself against intrusion and attacks. • Self-reflection: – Explicit knowledge representation: self-knowledge. • Better in semantically rich and diverse environments. • Plan and anticipate complex events (prediction). – Ability to reason about and explain own behavior and state. • More accessible administration interface. • Higher level of trust from users.
1. Self-managing: – Be able to maintain relationships with other elements. – Meet its obligations (agreements, policies). – Be able to identify on its own what services it needs to fulfill its obligations. – Policies: • Action policies: if-then rules. • Goal policies: require a self-model, planning, and conceptual knowledge representation. • Utility-function policies: a numerical characterization of state; need methods to carry out actions that optimize utility (difficult). A sketch of a utility-function policy follows below.
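To make the utility-function policy concrete, here is a small, hypothetical sketch: the agent scores each candidate action by a numerical utility of the predicted next state and acts greedily. The state model, action names, and weights are all invented for illustration.

```python
# Hypothetical sketch of a utility-function policy: score candidate actions
# by a numerical utility over the predicted next state, then act greedily.

def predict_next_state(state, action):
    # Toy self-model: an action adjusts load and reliability (illustrative).
    load, reliability = state
    if action == "add_replica":
        return (load * 0.8, min(1.0, reliability + 0.05))
    if action == "shed_load":
        return (load * 0.6, reliability - 0.02)
    return (load, reliability)  # "do_nothing"

def utility(state):
    load, reliability = state
    # Prefer low load and high reliability (weights are arbitrary choices).
    return -1.0 * load + 10.0 * reliability

def choose_action(state, actions=("add_replica", "shed_load", "do_nothing")):
    return max(actions, key=lambda a: utility(predict_next_state(state, a)))

print(choose_action((0.9, 0.95)))  # -> "add_replica" for this toy state
```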
1. Dynamic: dynamical systems are sets of time-dependent differential equations used to model physical or social phenomena – whose state (or instantaneous description) changes over time; – whose variability can be described causally; – where the important causal influences can be contained within a closed system of feedback loops. • Three broad categories: – Generative: the objective is to predict future states of the system from observations of past and present states. – Diagnostic: the objective is to infer what possible past states of the system might have led to the present state (or to the observations leading up to it). – Explanatory: the objective is to provide a theory for the physical or social phenomena.
1. Dynamical systems • Dynamic systems tend to exhibit two disparate kinematic processes: – the emergence of new structures, paths, and processes from past states; – the emergence of repetitive cycles of phenomena (for better or worse). • Dynamical systems are good for representing, explaining, and modeling change over time: – The system evolves in time according to a set of fixed rules. – Present conditions determine the future (Laplacian assumption). – The rules are usually nonlinear. – There are many (many) interacting variables.
1. Knowing how to specify the dependencies f(x1, x2, x3, ..., xn) in closed form is a challenge – many times we only know qualitative rules. • Non-stationarity of coefficients – e.g., sensitivities can change. • Feedback and feed-forward: control and anticipation – people and groups think; planets do not – free will. • Structural instability – functional relations can change over time (diachronics); dependencies can form and dissolve. • Created for celestial mechanics, not human and social dynamics – most valid as theoretical models for understanding a limited range of highly aggregate qualitative behavior (e.g., demography, macroeconomics). • Sensitive dependence on initial conditions – measurement error, quantum effects, etc. (see the sketch below).
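Sensitive dependence on initial conditions can be demonstrated with the logistic map, a standard one-variable nonlinear system; this is a generic textbook example, not taken from the source.

```python
# Sensitive dependence on initial conditions in the logistic map
# x_{t+1} = r * x_t * (1 - x_t), a standard nonlinear toy system.

def logistic_trajectory(x0, r=4.0, steps=40):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)   # initial condition perturbed by 1e-6

for t in (0, 10, 20, 30, 40):
    print(t, abs(a[t] - b[t]))      # the tiny initial gap grows to order 1
```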
1. Conway's Game of Life 1 and 2
1. Why is Life interesting? • The Game of Life lets us experiment with systems where we don't know all the rules: – study simple systems to learn basic rules; – build on this knowledge to construct more complex systems. – The rules are all that is needed to discover 'new' phenomena within the Life universe. – Initial boundary conditions (e.g., the starting pattern(s)) lead to interesting phenomena, just as in the real world. • Key challenge! – At what point can systems such as Life be presumed to model the real world? In complexity? In fidelity? – Question: can we adequately explain complex systems with sets of simple rules? (A minimal implementation follows below.)
                                                                                                                                                                                                                                                                                                                                                                                                                          1. http://cs.gmu.edu/~eclab/projects/mason/
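A minimal sketch of the Game of Life itself, using the usual B3/S23 rules; it shows how rules that fit in a few lines produce a five-cell "glider" that propagates across the grid.

```python
from collections import Counter

# Minimal Conway's Game of Life on an unbounded grid: live cells are a set
# of (x, y) coordinates; the complete B3/S23 rule set is a few lines long.

def step(live):
    """Advance one generation of the Game of Life."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live neighbours,
    # or 2 live neighbours and is already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "glider": five cells whose pattern translates diagonally forever.
state = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    state = step(state)
print(sorted(state))  # the same glider, shifted diagonally by one cell
```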
2. Dual-phase evolution (DPE) is a process that drives self-organization within complex adaptive systems.[1] It arises in response to phase changes within the network of connections formed by a system's components.

Notes:

                                                                                                                                                                                                                                                                                                                                                                                                                • https://en.wikipedia.org/wiki/Dual-phase_evolution
                                                                                                                                                                                                                                                                                                                                                                                                                1. DPE occurs where a system has an underlying network. That is, the system's components form a set of nodes and there are connections (edges) that join them. For example, a family tree is a network in which the nodes are people (with names) and the edges are relationships such as "mother of" or "married to". The nodes in the network can take physical form, such as atoms held together by atomic forces, or they may be dynamic states or conditions, such as positions on a chess board with moves by the players defining the edges.
                                                                                                                                                                                                                                                                                                                                                                                                                  1. Graphs and networks have two phases: disconnected (fragmented) and connected. In the connected phase every node is connected by an edge to at least one other node and for any pair of nodes, there is at least one path (sequence of edges) joining them. The Erdős–Rényi model shows that random graphs undergo a connectivity avalanche as the density of edges in a graph increases.[2] This avalanche amounts to a sudden phase change in the size of the largest connected subgraph. In effect, a graph has two phases: connected (most nodes are linked by pathways of interaction) and fragmented (nodes are either isolated or form small subgraphs). These are often referred to as global and local phases, respectively.
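A short sketch of the connectivity avalanche, assuming the networkx library is available: sweeping the edge probability p across the threshold near 1/n makes the largest connected component jump from a handful of nodes to most of the graph.

```python
import networkx as nx

# Sketch of the Erdos-Renyi connectivity avalanche: as the edge probability p
# crosses roughly 1/n, the largest connected component jumps from a few nodes
# to most of the graph.

n = 1000
for p in (0.0005, 0.001, 0.002, 0.004):   # the threshold is near 1/n = 0.001
    G = nx.erdos_renyi_graph(n, p, seed=42)
    giant = max(nx.connected_components(G), key=len)
    print(f"p = {p:.4f}  largest component: {len(giant)} of {n} nodes")
```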
1. In each of the two phases, the network is dominated by different processes.[1] In a local phase, the nodes behave as individuals; in the global phase, nodes are affected by interactions with other nodes. Most commonly the two processes at work can be interpreted as variation and selection. Variation refers to new features, which typically appear in one of the two phases. These features may be new nodes, new edges, or new properties of the nodes or edges. Selection here refers to ways in which the features are modified, refined, selected, or removed.
                                                                                                                                                                                                                                                                                                                                                                                                                      1. The effects of changes in one phase carry over into the other phase. This means that the processes acting in each phase can modify or refine patterns formed in the other phase. For instance, in a social network, if a person makes new acquaintances during a global phase, then some of these new social connections might survive into the local phase to become long-term friends. In this way, DPE can create effects that may be impossible if both processes act at the same time.
                                                                                                                                                                                                                                                                                                                                                                                                                        1. In physics, self-organized criticality (SOC) is a property of dynamical systems that have a critical point as an attractor. Their macroscopic behavior thus displays the spatial or temporal scale-invariance characteristic of the critical point of a phase transition, but without the need to tune control parameters to a precise value, because the system, effectively, tunes itself as it evolves towards criticality.
1. Crucially, however, the Bak–Tang–Wiesenfeld (BTW) paper emphasized that the complexity observed emerged in a robust manner that did not depend on finely tuned details of the system: variable parameters in the model could be changed widely without affecting the emergence of critical behavior, hence self-organized criticality. Thus, the key result of BTW's paper was its discovery of a mechanism by which the emergence of complexity from simple local interactions could be spontaneous, and therefore plausible as a source of natural complexity, rather than something that was only possible in artificial situations in which control parameters are tuned to precise critical values.
1. In addition to the nonconservative theoretical model mentioned above, other theoretical models for SOC have been based upon information theory[18], mean-field theory[19], the convergence of random variables[20], and cluster formation.[21] A continuous model of self-organized criticality has been proposed using tropical geometry. (A minimal sandpile sketch follows below.)
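A minimal sketch of the BTW sandpile model referenced above, under the usual toppling rule: a site holding four or more grains sends one grain to each of its four neighbours, and grains that fall off the boundary are lost. Grid size and grain counts are arbitrary choices for the sketch.

```python
import numpy as np

# Minimal Bak-Tang-Wiesenfeld (BTW) sandpile: grains are dropped at random
# sites; any site holding 4 or more grains topples, sending one grain to
# each of its four neighbours (grains falling off the edge are lost).
# The distribution of avalanche sizes is approximately a power law.

rng = np.random.default_rng(0)
L = 20
grid = np.zeros((L, L), dtype=int)
avalanche_sizes = []

for _ in range(5000):
    i, j = rng.integers(L, size=2)
    grid[i, j] += 1
    size = 0
    while True:
        unstable = np.argwhere(grid >= 4)
        if len(unstable) == 0:
            break
        for x, y in unstable:
            grid[x, y] -= 4
            size += 1
            for u, v in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
                if 0 <= u < L and 0 <= v < L:
                    grid[u, v] += 1
    avalanche_sizes.append(size)

print("largest avalanche:", max(avalanche_sizes))
```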
3. Self-organization:
2. Small amounts of input can produce large directed changes: lever points.
3. Computational complexity: – how long a program runs (or how much memory it uses); – asymptotic. • Language complexity (formal language theory): – classes of languages that can be computed (recognized) by different kinds of abstract machines; – decidability, computability. • Information-theoretic approaches (after Shannon and Brillouin): – algorithmic complexity (Solomonoff, Kolmogorov, and Chaitin): the length of the shortest program that can produce the phenomenon; – mutual information (many authors). • Logical depth (Bennett). • Thermodynamic depth (Lloyd and Pagels).
1. The ability to adapt depends on the observer, who chooses the scale and granularity of description. • An adaptive system is necessarily complex, but the converse is not necessarily true.
                                                                                                                                                                                                                                                                                                                                                                                            2. info: https://www.mendeley.com/viewer/?fileId=2731a026-9535-dd9d-4866-f29698c361d7&documentId=52f4d0be-1dee-3dc4-af46-6fb4971bc697
3. This is radically different for complex systems, where interactions themselves can change over time as a consequence of the dynamics of the system. In that sense, complex systems change their internal interaction structure as they evolve. Systems that change their internal structure dynamically can be viewed as machines that change their internal structure as they operate. However, a description of the operation of a machine using analytical equations would not be efficient. Indeed, to describe a steam engine by seeking the corresponding equations of motion for all its parts would be highly inefficient. Machines are best described as algorithms: a list of rules regarding how the dynamics of the system updates its states and future interactions, which then lead to new constraints on the dynamics at the next time step.
1. Algorithmic descriptions describe not only the evolution of the states of the components of a system, but also the evolution of its internal states (interactions) that will determine the next update of the states at the next time step. Many complex systems work in this way: states of components and the interactions between them are simultaneously updated, which can lead to the tremendous mathematical difficulties that make complex systems so complicated to understand.
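A schematic sketch of such an algorithmic description: states are updated through the current interaction matrix, and the interactions are then rewired as a function of the new states. The specific update rules (tanh dynamics, a Hebbian-like rewiring) are illustrative assumptions, not taken from the source.

```python
import numpy as np

# Schematic co-evolution of states and interactions: at each step the states
# are updated through the current interaction matrix, and the interactions
# are then updated as a function of the new states. Rules are illustrative.

rng = np.random.default_rng(1)
n = 10
state = rng.uniform(-1, 1, size=n)        # component states s_i(t)
M = rng.uniform(0, 1, size=(n, n))        # interaction strengths M_ij(t)

for t in range(100):
    # 1. Update states given the current interactions.
    state = np.tanh(M @ state)
    # 2. Update interactions given the new states (Hebbian-like rule):
    #    links between similarly behaving elements strengthen, others decay.
    M = 0.95 * M + 0.05 * np.outer(state, state).clip(min=0)

print(state.round(2))
```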
1. Often, computer simulations are the only way of studying and developing insights into the dynamics of complex systems. Such simulations often take the form of agent-based models, where elements and their interactions are modelled and simulated dynamically. Agent-based models allow us to study the collective outcomes of systems comprising elements with specific properties and interactions. The algorithmic description of systems in terms of update rules for states and interactions is fully compatible with the way computer simulations are done. In many real-world situations there is only a single history, and it cannot be repeated. This occurs in social systems, in the economy, and in biological evolution. Computer simulations allow us to create artificial histories that are statistically equivalent copies. Ensembles of histories can be created that help us understand the systemic properties of the systems that lead to them. Without simulations, predictive statements about systemic properties
1. like robustness, resilience, efficiency, likelihood of collapse, and so on would never be possible. The possibility of creating artificial histories solves the problem of repeatability.
1. The bottleneck for progress is the theory of complex systems: a mathematically consistent framework in which data can be systematically transformed into useful knowledge.
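A sketch of the ensemble idea: rerunning a stochastic toy model many times produces statistically equivalent artificial histories, from which a systemic property such as the likelihood of collapse can be estimated. The random-walk model below is an assumption made purely for illustration.

```python
import random

# Ensemble of artificial histories for a toy stochastic system: a random-walk
# quantity that "collapses" if it ever reaches zero. Rerunning the history
# many times turns a single irreversible history into a distribution.

def one_history(steps=200, start=10):
    x = start
    for _ in range(steps):
        x += random.choice((-1, +1))
        if x <= 0:
            return True          # collapsed during this history
    return False

random.seed(7)
runs = 10_000
collapses = sum(one_history() for _ in range(runs))
print(f"estimated collapse probability: {collapses / runs:.3f}")
```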
                                                                                                                                                                                                                                                                                                                                                                                    2. https://math.stackexchange.com/questions/75566/first-order-logic-nested-quantifiers-for-same-variables
2. Fuzzy sets: fuzzy logic is a superset of conventional (Boolean) logic that has been extended to handle the concept of partial truth: truth values between "completely true" and "completely false". It was introduced by Dr. Lotfi Zadeh of UC Berkeley in the 1960s as a means to model the uncertainty of natural language. Zadeh says that rather than regarding fuzzy theory as a single theory, we should regard the process of "fuzzification" as a methodology to generalize ANY specific theory from a crisp (discrete) to a continuous (fuzzy) form (see "extension principle" in [2]). Thus researchers have also introduced "fuzzy calculus", "fuzzy differential equations", and so on.

Notes:

•  http://www.cs.cmu.edu/Groups/AI/html/faqs/ai/fuzzy/part1/faq-doc-2.html
•  https://sci-hub.se/https://doi.org/10.1016/0165-0114(79)90018-6
•  https://www.sciencedirect.com/topics/engineering/fuzzy-set-theory
•  https://www.tutorialspoint.com/fuzzy_logic/fuzzy_logic_set_theory.htm
1. In classical set theory, a subset U of a set S can be defined as a mapping from the elements of S to the elements of the set {0, 1}: U: S --> {0, 1}. This mapping may be represented as a set of ordered pairs, with exactly one ordered pair present for each element of S. The first element of the ordered pair is an element of the set S, and the second element is an element of the set {0, 1}. The value zero is used to represent non-membership, and the value one is used to represent membership. The truth or falsity of the statement "x is in U" is determined by finding the ordered pair whose first element is x. The statement is true if the second element of the ordered pair is 1, and false if it is 0.
1. Similarly, a fuzzy subset F of a set S can be defined as a set of ordered pairs, each with the first element from S and the second element from the interval [0, 1], with exactly one ordered pair present for each element of S. This defines a mapping between elements of the set S and values in the interval [0, 1]. The value zero is used to represent complete non-membership, the value one is used to represent complete membership, and values in between are used to represent intermediate DEGREES OF MEMBERSHIP. The set S is referred to as the UNIVERSE OF DISCOURSE for the fuzzy subset F. Frequently, the mapping is described as a function, the MEMBERSHIP FUNCTION of F. The degree to which the statement "x is in F" is true is determined by finding the ordered pair whose first element is x; the DEGREE OF TRUTH of the statement is the second element of that ordered pair. In practice, the terms "membership function" and "fuzzy subset" are used interchangeably.
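A small sketch contrasting the two definitions, with a triangular membership function standing in for an arbitrary fuzzy subset; the shape and universe of discourse are illustrative assumptions.

```python
# Crisp vs. fuzzy subsets as mappings. A crisp subset maps each element of S
# to {0, 1}; a fuzzy subset maps each element to a degree in [0, 1].

def crisp_low(x):
    return 1 if x < 10 else 0          # classical membership: in or out

def fuzzy_low(x):
    # Triangular membership function (an illustrative choice):
    # fully LOW at 0, not LOW at all from 20 upward.
    return max(0.0, min(1.0, (20 - x) / 20))

for x in (0, 5, 15, 25):
    print(x, crisp_low(x), round(fuzzy_low(x), 2))
```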
1. Logic operations: now that we know what a statement like "X is LOW" means in fuzzy logic, how do we interpret a statement like "X is LOW and Y is HIGH or (not Z is MEDIUM)"? The standard definitions in fuzzy logic are: truth(not x) = 1.0 - truth(x); truth(x and y) = minimum(truth(x), truth(y)); truth(x or y) = maximum(truth(x), truth(y)). Some researchers in fuzzy logic have explored the use of other interpretations of the AND and OR operations, but the definition for the NOT operation seems to be safe.
                                                                                                                                                                                                                                                                                                                                                                                      1. Note that if you plug just the values zero and one into these definitions, you get the same truth tables as you would expect from conventional Boolean logic. This is known as the EXTENSION PRINCIPLE, which states that the classical results of Boolean logic are recovered from fuzzy logic operations when all fuzzy membership grades are restricted to the traditional set {0, 1}. This effectively establishes fuzzy subsets and logic as a true generalization of classical set theory and logic. In fact, by this reasoning all crisp (traditional) subsets ARE fuzzy subsets of this very special type; and there is no conflict between fuzzy and crisp methods.
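The three standard operations are a few lines of code, and restricting the inputs to {0, 1} recovers the Boolean truth tables, exactly as the extension principle states.

```python
# The standard fuzzy logic operations; restricted to {0, 1} they reproduce
# the Boolean truth tables (the extension principle mentioned above).

def f_not(x):    return 1.0 - x
def f_and(x, y): return min(x, y)
def f_or(x, y):  return max(x, y)

# Degrees of truth:
print(f_and(0.7, 0.4))   # 0.4
print(f_or(0.7, 0.4))    # 0.7
print(f_not(0.7))        # 0.3 (up to floating-point rounding)

# Restricted to crisp values, Boolean logic is recovered:
for a in (0, 1):
    for b in (0, 1):
        assert f_and(a, b) == (a and b)
        assert f_or(a, b) == (a or b)
```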
1. In a typical example, the sets (or classes) are numbers that are negative large, negative medium, negative small, near zero, positive small, positive medium, and positive large. The value µ is the degree of membership in the set.
1. A fuzzy expert system is an expert system that uses a collection of fuzzy membership functions and rules, instead of Boolean logic, to reason about data. The rules in a fuzzy expert system are usually of a form similar to: if x is low and y is high then z = medium, where x and y are input variables (names for known data values), z is an output variable (a name for a data value to be computed), low is a membership function (fuzzy subset) defined on x, high is a membership function defined on y, and medium is a membership function defined on z. The antecedent (the rule's premise) describes to what degree the rule applies, while the conclusion (the rule's consequent) assigns a membership function to each of one or more output variables. Most tools for working with fuzzy expert systems allow more than one conclusion per rule. The set of rules in a fuzzy expert system is known as the rulebase or knowledge base. (A sketch of the full four-step evaluation pipeline follows the steps below.)
                                                                                                                                                                                                                                                                                                                                                                                            1. 1. Under FUZZIFICATION, the membership functions defined on the input variables are applied to their actual values, to determine the degree of truth for each rule premise.
                                                                                                                                                                                                                                                                                                                                                                                              1. 2. Under INFERENCE, the truth value for the premise of each rule is computed, and applied to the conclusion part of each rule. This results in one fuzzy subset to be assigned to each output variable for each rule. Usually only MIN or PRODUCT are used as inference rules. In MIN inferencing, the output membership function is clipped off at a height corresponding to the rule premise's computed degree of truth (fuzzy logic AND). In PRODUCT inferencing, the output membership function is scaled by the rule premise's computed degree of truth.
1. 3. Under COMPOSITION, all of the fuzzy subsets assigned to each output variable are combined together to form a single fuzzy subset for each output variable. Again, usually MAX or SUM is used. In MAX composition, the combined output fuzzy subset is constructed by taking the pointwise maximum over all of the fuzzy subsets assigned to the output variable by the inference rule (fuzzy logic OR). In SUM composition, the combined output fuzzy subset is constructed by taking the pointwise sum over all of the fuzzy subsets assigned to the output variable by the inference rule.
                                                                                                                                                                                                                                                                                                                                                                                                  1. 4. Finally is the (optional) DEFUZZIFICATION, which is used when it is useful to convert the fuzzy output set to a crisp number. There are more defuzzification methods than you can shake a stick at (at least 30). Two of the more common techniques are the CENTROID and MAXIMUM methods. In the CENTROID method, the crisp value of the output variable is computed by finding the variable value of the center of gravity of the membership function for the fuzzy value. In the MAXIMUM method, one of the variable values at which the fuzzy subset has its maximum truth value is chosen as the crisp value for the output variable.
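Putting the four steps together, here is a compact, hypothetical sketch of evaluating one rule ("if x is LOW and y is HIGH then z is MEDIUM") with MIN inference, MAX composition, and centroid defuzzification. All membership functions are invented triangular shapes.

```python
import numpy as np

# Hypothetical end-to-end fuzzy inference for the single rule
#   "if x is LOW and y is HIGH then z is MEDIUM"
# using MIN inference, MAX composition, and centroid defuzzification.

def tri(u, a, b, c):
    """Triangular membership function rising a->b and falling b->c."""
    return np.maximum(0.0, np.minimum((u - a) / (b - a), (c - u) / (c - b)))

z = np.linspace(0, 10, 101)          # universe of discourse for z
medium_z = tri(z, 2, 5, 8)           # output membership function MEDIUM

def evaluate(x, y):
    # 1. FUZZIFICATION: degrees of truth of the premises.
    x_is_low = tri(x, -5, 0, 5)
    y_is_high = tri(y, 5, 10, 15)
    # 2. INFERENCE (MIN): clip the output set at the premise's truth value.
    firing = min(x_is_low, y_is_high)
    clipped = np.minimum(medium_z, firing)
    # 3. COMPOSITION (MAX): with one rule, the combined set is just `clipped`;
    #    with several rules it would be np.maximum over all clipped sets.
    combined = clipped
    # 4. DEFUZZIFICATION (centroid): center of gravity of the combined set.
    if combined.sum() == 0:
        return None
    return float((z * combined).sum() / combined.sum())

print(evaluate(x=2.0, y=8.0))        # a crisp z near 5
```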
2. Boolean change: https://en.wikipedia.org/wiki/Boolean_differential_calculus
3. Language: linguistic logic determines the way we think of the world.
1. The form of a system for a given phenomenon depends on linguistic structure: language -> logic -> metaphysics -> paradigms and systems.
1. Science systems as language: re-evaluate other language systems, such as Hopi, and change our mental linguistic logic to make new systems. At a certain level of abstraction, Newton's mechanics are cultural and look the way they do because of his language.
1. Objects within the greater system: we react to small factors cut out of larger surroundings, forming a new item, the Umwelt. The characteristics of the object change as we view it differently and shape it.
1. Perception of space depends on the organization of the observer, i.e., leading to different forms of time to describe perceived change.
1. (1) Our perception is all the same, but (2) our conception is different. Do not confuse the two.
                                                                                                                                                                                                                                                                                                                                                                                      2. How we screen the information:
1. Screening as a passage where differences pass and criss-cross. Logic as a filter.
                                                                                                                                                                                                                                                                                                                                                                                          1. Screening as a display: concealing is showing and showing is concealing
1. Subjects are the screen, and knowing is screening.
2. Exformation: the information discarded when we make sentences. Patterns in systems are context-dependent. The overall structure depends on what the context means.
1. The patterns for filtering and processing operate at different levels and across multiple media. Moving from the particular to the general relies on screening. Media: physical perception and conceptual.
1. Relations between objects are intrinsic to the identity of particular objects. When a subject Sa interacts with an object Oa, it relates to all other objects in the network (Oi, etc.), and the subject interrelates itself as an object. When subject Sa becomes conscious of itself as a subject of conscious objects (Sa - Oa), self-consciousness doubles and comes into mediated relation with the rest of the objects. See pages 208 and 213 for diagrams of these.
1. Subjectivity and objectivity are always alive and evolving within knowledge. They are nodes. Networks do not exist without nodalization and reflexive self-consciousness.
                                                                                                                                                                                                                                                                                                                                                                                                    1. Myth and narrative creation: transform and collapse
2. The outside world is open; abstract worlds like logic are closed.
3. The relation between language and conception is reciprocal: the structure of language determines which parts of reality will be abstracted, and the principles taught determine the language.
                                                                                                                                                                                                                                                                                                                                                                                        2. Agents
                                                                                                                                                                                                                                                                                                                                                                                          1. Their experience with a lifeworld: The lifeworld can be thought of as the horizon of all our experiences, in the sense that it is that background on which all things appear as themselves and meaningful. The lifeworld cannot, however, be understood in a purely static manner; it isn't an unchangeable background, but rather a dynamic horizon in which we live, and which "lives with us" in the sense that nothing can appear in our lifeworld except as lived.
                                                                                                                                                                                                                                                                                                                                                                                            1. Even if a person's historicity is intimately tied up with his lifeworld, and each person thus has a lifeworld, this doesn't necessarily mean that the lifeworld is a purely individual phenomenon. In keeping with the phenomenological notion of intersubjectivity, the lifeworld can be intersubjective even though each individual necessarily carries his own "personal" lifeworld ("homeworld"); meaning is intersubjectively accessible, and can be communicated (shared by one's "homecomrades"). However, a homeworld is also always limited by an alienworld. The internal "meanings" of this alienworld can be communicated, but can never be apprehended as alien; the alien can only be appropriated or assimilated into the lifeworld, and only understood on the background of the lifeworld.
                                                                                                                                                                                                                                                                                                                                                                                              1. On the one hand a person's own reality is her subjective construct. On the other hand this construct—in spite of all subjectivity—is not random: Since a person is still linked to her environment, her own reality is influenced by the conditions of this environment
                                                                                                                                                                                                                                                                                                                                                                                                1. "Life conditions mean a person's material and immaterial circumstances of life. Lifeworld means a person's subjective construction of reality, which he or she forms under the condition of his or her life circumstances."
                                                                                                                                                                                                                                                                                                                                                                                                  1. "thoughts on a constructivist comprehension of lifeworlds contours the integration of micro-, meso- and macroscopic approaches. This integration is not only necessary in order to relate the subjective perspectives and the objective frame conditions to each other but also because the objective frame conditions obtain their relevance for the subjective lifeworlds not before they are perceived and assessed." - LoAs
          2. Language narratives (narrative = language + memory) as systems.
        2. Types of agents: 1. the aggregate agent, whose behavior depends on the interactions of the members in its network; aggregation can add hierarchical levels, and the members themselves are adaptive.
          1. Behavior: if-then reactions to stimuli. Establish the possible stimuli, responses, and rules an agent can have: 1. fix its stimuli, 2. see its responses, 3. then you can see its world.
          2. Agents respond to if-then stimulus-response rules. Define the set of response rules: describe what information the agent can receive and the responses it can give. A major part of modeling a system is selecting and representing stimuli and responses, because strategies depend on them. Once we specify the range of possible stimuli and the set of allowed responses for a given agent, we have determined the rules the agent can have; from the rules we arrive at behaviors, and then at adaptation.
            1. Adaptation: changes in strategy structure based on the agent's experience of the system over time.
              1. Each if-then rule = a microagent; the input/output role of messages provides for interactions. Each rule has its own detectors and effectors: if there is a message of the right kind, then send a specified message. These are message-processing rules. Some rules act on detector messages, processing information from the environment; others act on messages sent by other rules. Example: IF object centered THEN send @; IF @ THEN extend tongue. The agent pairs information with rules (see p. 47 of the CAS text).
                1. Rules for messages: say all messages are strings of 1s and 0s (still Turing-computable), with a standard length L. The set of all possible messages is then {1,0}^L.
                  1. Conditions of rules determine which messages a rule responds to. Introduce a new symbol # that means "anything is acceptable at this position; I don't care about this stimulus". Example: 1##...# (length L) as the condition part of a rule. Rule 1: IF moving #...# THEN flee. Rule 2: IF moving, small, near THEN approach. A condition that uses more #s accepts a more general, wider range of messages. Finally, with the two sets M = {1,0}^L and C = {1,0,#}^L we can describe the behavior of agents in symbols; these rules generate the behavior.
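
A minimal Python sketch of this condition-matching idea, assuming messages of length L = 8; the rule contents are illustrative, not from the source:

```python
# Messages are bit strings of length L; a rule's condition is a string
# over {0, 1, #}, where '#' accepts either bit at that position.

L = 8  # assumed standard message length

def matches(condition: str, message: str) -> bool:
    """Return True if the condition accepts the message."""
    return all(c == '#' or c == m for c, m in zip(condition, message))

# Rule 1: "if moving (first bit = 1), flee" - maximally general elsewhere.
rule1 = ("1" + "#" * (L - 1), "flee")
# Rule 2: "if moving, small, near, approach" - a more specific condition.
rule2 = ("111" + "#" * (L - 3), "approach")

msg = "11100101"  # hypothetical detector message
for condition, action in (rule1, rule2):
    if matches(condition, msg):
        print(condition, "->", action)
```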
                    1. Full computational power for agents requires conjunction, IF ___ AND IF ___ THEN ___, and negation, IF NOT ___ THEN ___ (sketched below).
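
A sketch of these extended rule forms as hypothetical Python helpers: conjunction of conditions plus negation, evaluated against the agent's current message list:

```python
# Conditions are strings over {0,1,#}; a rule fires only if every
# positive condition is matched by some current message AND no negated
# condition is matched by any current message.

def matches(cond: str, msg: str) -> bool:
    return all(c in ("#", m) for c, m in zip(cond, msg))

def any_match(cond: str, msgs: list) -> bool:
    return any(matches(cond, m) for m in msgs)

def rule_fires(positive: list, negative: list, msgs: list) -> bool:
    """IF <all positive> AND IF NOT <any negative> THEN fire."""
    return (all(any_match(c, msgs) for c in positive)
            and not any(any_match(c, msgs) for c in negative))

msgs = ["101", "010"]
print(rule_fires(["1##", "0##"], [], msgs))   # True: both conjuncts satisfied
print(rule_fires(["1##"], ["01#"], msgs))     # False: a negated condition matches
```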
                    2. Different uses of messages in the system: detector messages have built-in meaning, assigned by the environment they detect; rule messages have no assigned meaning except when they activate effectors/actions. Rule messages do not come from the environment. Several rule messages can be active simultaneously, so messages can be building blocks for modeling complex situations. Agents have inventories: message lists that store all current messages. There is no single rule for every situation; we handle the current message list with logic, a logic of existence and a logic of situation. Rules become building blocks in a mental model (see p. 52 for a graph of a situation with rules).
                      1. Rules = a set of facts about the agent's environment, and all rules must be logically consistent. A rule can also be a hypothesis: if it fails, we move to other rules and test them. Assign each rule a strength that over time reflects its usefulness to the system, and modify strength based on experience: credit assignment. This is easy when the environment has a direct payoff, like a reward for an action; we base credit assignment on what is useful (cf. Pavlov). The bucket brigade algorithm strengthens rules that belong to a chain of actions ending in rewards; it is a process of hypothesis confirmation with stage-setting and subgoals of the ultimate goal.
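
A toy sketch of bucket-brigade credit assignment as described above; the rule chain, reward, and pass-back fraction are invented for illustration and are not Holland's exact update:

```python
# Each rule that fires passes part of its gain back to the rule that
# "set the stage" for it, so strength flows backward along chains that
# end in an environmental reward.

strengths = {"scout": 10.0, "chase": 10.0, "capture": 10.0}
chain = ["scout", "chase", "capture"]   # rules firing in sequence
reward = 12.0                           # payoff received at the end
share = 0.5                             # assumed fraction passed upstream

strengths[chain[-1]] += reward          # last rule collects the reward
for later, earlier in zip(reversed(chain), reversed(chain[:-1])):
    payment = share * strengths[later]  # pay predecessor from current strength
    strengths[later] -= payment
    strengths[earlier] += payment

print(strengths)  # stage-setting rules end up strengthened too
```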
                        1. An agent prefers rules that use more information about a situation. A condition of all #s accepts any message, so it gives us no information when satisfied. Make the bid proportional to the product of strength and specificity: if either the strength or the specificity is close to 0, the bid will also be close to 0; only if both are large will the bid be large. Rule competition: rules supply different amounts of information, and the more specific ones are better (see p. 58 for a chart).
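
A small sketch of this bidding rule, taking specificity as the fraction of non-# positions and using an assumed constant k:

```python
# bid = k * strength * specificity: near-zero if either factor is small.

k = 0.1  # assumed bidding constant

def specificity(condition: str) -> float:
    return sum(c != '#' for c in condition) / len(condition)

def bid(strength: float, condition: str) -> float:
    return k * strength * specificity(condition)

print(bid(50.0, "1#######"))  # general rule: low specificity, low bid
print(bid(50.0, "111#####"))  # more specific rule bids higher
```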
                          1. Two rules can seem contradictory when they use different amounts of information, yet work in symbiosis. Where the weaker, more general rule would make a mistake, the stronger, more specific rule wins the competition and saves the general rule the loss; the specific rule, even though it cannot fire at the same time as the general one, benefits it. This yields a more thorough model of the environment. It is easier to test and discover a general rule than a specific one; as experience increases, internal models are modified by adding competing, more specific exception rules that interact symbiotically with the default rules. This = a default hierarchy. Default hierarchies expand over time from general to specific; this is change, and we adapt by discovering rules. Rules are discovered via tested blocks, which makes a new rule more plausible if its blocks have already been tested. Building blocks for rules: exploit the values at selected positions in the rule string to form blocks, e.g., is it useful to start a rule with a 1 in the first position? Values at individual positions = building blocks.
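
A toy default hierarchy under the same assumed bid rule: the specific exception rule outbids the general default exactly where the default would err, and the default fires everywhere else:

```python
# Two rules with equal strength; the exception's higher specificity
# decides the competition only on the messages it covers.

def specificity(c: str) -> float:
    return sum(ch != "#" for ch in c) / len(c)

rules = {"default": ("1##", "flee", 50.0),
         "exception": ("111", "approach", 50.0)}

def winner(msg: str) -> str:
    live = {name: (c, a, s) for name, (c, a, s) in rules.items()
            if all(x in ("#", b) for x, b in zip(c, msg))}
    return max(live, key=lambda n: live[n][2] * specificity(live[n][0]))

print(winner("100"))  # 'default' fires: flee
print(winner("111"))  # 'exception' outbids the default: approach
```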
                            1. Fitness in alleles: fitness = the sum of the contributions of the building blocks, so find the contribution of each block first. Because of if-then rules, the parts of a rule are evaluated only in the environments they are designed for. Let building blocks use more than one position in a string: nonlinearity, e.g., a block encompassing the first three positions in combination. Picking and choosing blocks = schemas. An individual schema is a string in {1,0,#,*}^L standing for the set of all conditions that use that block: a condition is identified with the set of messages it can accept, and a schema is identified with the set of conditions that contain it as a block (i.e., * ranges over condition symbols as # ranges over message bits). Now we can generate new rules.
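
A sketch of schema membership, assuming * plays for conditions the role that # plays for messages:

```python
# A condition is a string in {0,1,#}^L; a schema is a string in
# {0,1,#,*}^L where '*' means "any condition symbol at this position".

def condition_in_schema(schema: str, condition: str) -> bool:
    return all(s == '*' or s == c for s, c in zip(schema, condition))

schema = "10" + "*" * 6  # fixes the building block "10" in positions 1-2
print(condition_in_schema(schema, "10##1###"))  # True: contains the block
print(condition_in_schema(schema, "01##1###"))  # False: block absent
```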
                              1. Strong rules = successful, fit parents producing children that in turn become parents. Offspring coexist with their parents and replace other, weaker contenders in the environment. Strong rules = knowledge won, the core of the internal model. Offspring are not identical to their parents; they can be new hypotheses to be tested against the environment. Parental traits can reappear in new combinations = crossing over (see p. 67 for a chart). Strength of rules as the value of blocks when the only data = the strength of rules: a schema will make up a large part of the rules if many rules instantiate it, e.g., many rules starting with 1; this = strength. Compare the average strength of rules starting with 1 to the overall average strength of rules. This calls for repeated experimentation with the use of rules. Everything is connected in a landscape: each schema is a point in the landscape, and the corresponding average strength of its instances is the height of the landscape at that point. Schemata = subsets of the space of possibilities, forming a complicated lattice of inclusions and intersections.
                                1. Producing a new generation from the current one: 1. Reproduction according to fitness: select strings from the current population (the current set of rules for the agent) to act as parents; the more fit the string, the stronger the rule, and the more likely it is to be chosen as a parent. 2. Recombination: parent strings are paired, crossed, and mutated to produce offspring strings. 3. Replacement: the offspring strings replace randomly chosen strings in the current population. Repeat over and over to make many generations (a toy sketch follows the next note).
                                  1. Say the number of offspring is determined by a fitness measure in each generation: add up the fitness-weighted instances of a block/rule across the population. Different totals mean more or fewer offspring of that type in the next generation; in effect, a type's expected number of offspring is proportional to its fitness relative to the population average (p. 72 for the full math, p. 77 for mutation probability).
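
A compact sketch of the three-step generation cycle above; the population size, mutation rate, and toy fitness function are assumptions for illustration:

```python
# Fitness-proportional selection, one-point crossover plus mutation,
# then random replacement in the current population.
import random

L, POP, MUT = 8, 10, 0.01

def fitness(s: str) -> float:
    # Toy stand-in for environmental feedback: reward strings that
    # start with '1' (a "tested building block") and contain many 1s.
    return 1.0 + s.count("1") + (2.0 if s[0] == "1" else 0.0)

pop = ["".join(random.choice("01") for _ in range(L)) for _ in range(POP)]

for generation in range(50):
    # 1. Reproduction according to fitness.
    weights = [fitness(s) for s in pop]
    p1, p2 = random.choices(pop, weights=weights, k=2)
    # 2. Recombination: crossover, then per-position mutation.
    cut = random.randrange(1, L)
    child = p1[:cut] + p2[cut:]
    child = "".join(b if random.random() > MUT else random.choice("01")
                    for b in child)
    # 3. Replacement of a randomly chosen current string.
    pop[random.randrange(POP)] = child

print(max(pop, key=fitness))  # strong schemata come to dominate
```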
                                    1. Making complex structures with natural selection: Echo. Complex systems are always changing, so it is hard to assign a single fitness level, e.g., when agents exist in a context of other agents. We need a class of models where the welfare of an agent stems from its interactions rather than from a fitness function: Echo.
                                      1. Design criteria: 1. Simple, even if not full enough for real systems; interactions are constrained. 2. Agents should be predictable across a wide range of CAS settings; the model should provide for the study of interactions of agents that are distributed in space and mobile, with inputs (stimuli and resources) assignable to different sites when desired. 3. It should allow experiments with fitness: fitness should not be an external property but should depend on the context provided by the site and the other agents at that site, so the fitness of agents changes as the system evolves. 4. Primitive mechanisms should have ready counterparts in all CAS. 5. Incorporate well-known models of particular systems where possible, making bridges to paradigmatic models that are useful abstractions: special cases. 6. Amenable to mathematical analysis, to help arrive at valid generalizations from specific simulations; correspondences supply mathematical landmarks that we can link into a map of the simulations. Step by step: each step adds one additional mechanism or modification and describes what was gained.
                                        1. As more mechanisms are added, the means for collecting resources expand. Resources are renewable and are combined into strings; geography = a set of interconnected sites.
                                          1. See p. 100 onward for the full steps of using and applying Echo. Use in my systems?
                                            1. https://dl.acm.org/doi/pdf/10.1145/321127.321128
                                              1. https://deepblue.lib.umich.edu/bitstream/handle/2027.42/5575/bac4292.0001.001.pdf?sequence=5&isAllowed=y
                                                1. Also see Signals and Boundaries.
            2. Hidden Order: an aggregate agent's behavior depends on the interactions of the component agents within its network, and aggregates may again be aggregated to add new hierarchical levels. Adaptive agents.
              1. Aggregation: aggregate objects into categories and treat them as equivalent, and make new categories by combination. We decide which details are irrelevant for the question of interest and ignore them.
                1. Aggregates can act as meta-agents, described by aggregate first-sense properties: 1. aggregate agent -> 2. emergent aggregate property of the system.
                  1. Systems use tags to manipulate symmetries. Tags let us filter out irrelevant information: tagging breaks symmetries and changes the object, letting us choose selectively. 1. tag condition, 2. tag assigned by the system.
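
A toy illustration of tag-based filtering; the objects and the tag condition are invented:

```python
# An agent interacts only with objects whose tag satisfies its
# condition, breaking the symmetry of "treat everything alike".

objects = [("food", "11"), ("toxin", "01"), ("mate", "10")]
accept_prefix = "1"   # tag condition: interact only with tags starting '1'

selected = [name for name, tag in objects if tag.startswith(accept_prefix)]
print(selected)  # ['food', 'mate']: 'toxin' is filtered out by its tag
```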
                    1. Systems are nonlinear in the problems they pose and in the equations needed to describe them (e.g., p. 20).
                  2. Agents have internal models: eliminate details so that patterns are emphasized. The agent must select patterns from its input and convert them into changes in its internal structure, and these changes must let the agent predict consequences when the pattern, or a similar one, is seen again. Models have temporal consequences within the agent.
                    1. Tacit models: a tacit model prescribes a current action under an implicit prediction of a desired future state; it operates on an evolutionary timescale.
                      1. Overt models: an overt model is the basis for explicit but internal explorations of alternatives, a process called lookahead, e.g., mentally imagining possible actions.
                      2. An internal model ≠ all the internal elements within an agent: something is a model only if, by inspecting it, we can infer something about the agent's environment. Models have nonlinearities; e.g., resource flows that use cycles are not linear.
                        1. Models let us infer something about the thing modeled.
                          1. Building blocks of a model: we distill a complex scene into individual parts, e.g., we analyze a complex scene by looking for elements already tested for reusability via natural selection. Reusability = repetition: we use the blocks repeatedly even if they never appear twice in the same combination. We can treat the blocks as numerical objects and calculate the possible number of combinations, an example of scientific activity; we choose the appropriate option from the set of blocks. Example: a face made by assigning numbers to features and combining them.
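
A worked toy version of the combination count behind the face example; the features and variant counts are my assumptions, not the book's exact numbers:

```python
# Each feature slot has a handful of interchangeable blocks, and the
# possible combinations multiply: a few blocks generate many wholes.
import math
import random

variants_per_feature = {"hair": 10, "eyes": 10, "nose": 10, "mouth": 10}

total = math.prod(variants_per_feature.values())
print(total)  # 10_000 distinct faces from only 40 building blocks

face = {feat: random.randrange(n) for feat, n in variants_per_feature.items()}
print(face)   # one face, encoded as a number per feature
```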
                            1. We gain an advantage when we reduce the blocks at one level to interactions and combinations of blocks at a lower level: the laws of the higher level derive from the laws of the lower blocks. Blocks impose regularity on a complex world as seen. We have a basic LoA and can change our mental images as needed. We deal with situations by decomposing them and evoking rules we know from previous experience; experience is a pattern. We use the blocks to model the situation in a way that suggests appropriate consequences and actions.
                  3. Agents need rules: there must be a single syntax to describe everything within the system, it must provide for all interactions among agents, and there must be a procedure for adaptively modifying the rules, i.e., the if-then functions above.
                    1. An agent senses its environment via different stimuli; stimuli can carry tags and touch the agent, and the agent must filter the information first. The environment conveys information to the agent via detectors, which can turn on and off: a binary device that conveys one bit of information. This describes the way agents filter; not all CAS agents have detectors. Via binary detectors we can use standard messages, i.e., binary strings, to represent the information selected by the agent.
                      1. The agent reacts to detector messages via effectors: actions. Each effector has an elementary effect on the environment when activated by a message. Effectors decode messages to cause actions in the environment, inverting the procedure detectors use to encode activity into messages. Effector = output.
                        1. The pipeline: 1. detectors 2. performance system 3. effectors (a minimal end-to-end sketch follows).
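
A minimal end-to-end sketch of this pipeline in Python; all names, encodings, and the "most specific matching rule wins" tie-break are assumptions:

```python
# Detectors encode the environment into a binary message, the
# performance system runs matching rules, and effectors decode the
# winning message into an action on the environment.

DETECTORS = ["moving", "small", "near"]          # one bit each

def detect(environment: dict) -> str:
    """Encode the environment as a binary message, one bit per detector."""
    return "".join("1" if environment.get(d) else "0" for d in DETECTORS)

RULES = [  # (condition over {0,1,#}, effector message)
    ("1##", "flee"),
    ("111", "approach"),
]

def perform(message: str) -> str:
    """Performance system: the most specific matching rule wins."""
    matching = [(cond, act) for cond, act in RULES
                if all(c in ("#", m) for c, m in zip(cond, message))]
    if not matching:
        return "wait"
    cond, act = max(matching, key=lambda r: sum(c != "#" for c in r[0]))
    return act

def effect(action: str) -> None:
    """Effector: turn the chosen message into an action."""
    print("agent does:", action)

effect(perform(detect({"moving": True, "small": True, "near": True})))
```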
