
Wolfgang Reisig, "Understanding Petri Nets"

siiky

2023/09/14

2024/01/15

2024/01/15

book,programming,petri_nets

https://link.springer.com/book/10.1007/978-3-642-33278-4

https://dx.doi.org/10.1007/978-3-642-33278-4

ipfs://bafykbzacebpvzk5omnc5rqj7iraewyj6xi5ycfxley5blcb3644begbqgnqyu/94a754088b9927648434c9f6110689c1

Works citing this book

Chapter 1: An Example

# Further Reading
Someone who models a system does not always immediately think about components that behave like places and transitions of a Petri net. Usually, a system is first theoretically broken down into abstract components, which are later on systematically refined. All modeling techniques based on Petri nets make use of refinement and composition. In connection with colored nets [38], hierarchical concepts are introduced. Girault and Valk [29] also recommend a refining approach in their extensive book on system design with Petri nets. Introductory texts of various kinds can be found in the anthologies of the latest two Advanced Courses on Petri Nets [18], [70].
# How It Began
In the late 1950s, Carl Adam Petri, at the time a research associate at the Department for Instrumental Mathematics at the University of Bonn, Germany, thought very pragmatically about the implementation of recursive functions. After all, for such functions, it is generally not possible to predict how much space their calculations will consume. If the available resources are insufficient for a calculation, the data processing system should be expandable, to continue the calculation. This is more efficient than starting over with a larger system.
So Petri sought a system architecture that can be expanded indefinitely. Such an architecture does not have any central components, most of all no central, synchronizing clock, because every expansion enlarges the system in space. Connections to the clock would become longer, and longer cycles would demand lower clocking frequencies. At some point, the limitations of the laws of physics would have to be broken in order to further expand the system. Therefore, such an architecture inevitably has to make do without any synchronizing clocks.
In his famous dissertation “Kommunikation mit Automaten” (Communication with Automata) [56], Petri shows that it is actually possible to construct such an indefinitely expandable, asynchronous system architecture. It incorporates locally confined components communicating with each other via local interfaces.
Actions with locally confined causes and effects are the central idea of the nets proposed by Petri in [56]. Termed “Petri nets”, they became one of the most popular concepts of computer science. Only later did Petri start to use a graphical representation. Thus, the first text on Petri nets does not contain a single Petri net!

pp. 11-12

[56] Carl Adam Petri, Kommunikation mit Automaten

[29] Claude Girault, Rüdiger Valk, Petri Nets for Systems Engineering

Unfortunately, I still don't know German well enough to read [56].

Chapter 2: The Basic Concepts

[35] D. Harel and R. Marelly, "Come, Let's Play: Scenario-Based Programming Using LSCs and the Play-Engine"

# A Universal, Expandable Architecture
In his dissertation, Petri designed a universal computer architecture that can be expanded an arbitrary number of times. We will explain this idea using the example of a finite, but infinitely expandable stack.

p. 24
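
Here's a tiny Python sketch of how I picture the idea (my own analogy, not the book's Petri net construction): a stack assembled from identical cells, each holding at most one value and talking only to its direct neighbour. Growing the stack just means appending one more cell; nothing central has to change.

```python
# My own sketch of the "indefinitely expandable stack" idea, not the book's
# construction: identical cells, each with at most one value and a link to its
# neighbour only. Expansion is purely local: a cell appends a new cell below.

class Cell:
    def __init__(self):
        self.value = None   # at most one value per cell
        self.below = None   # local connection to the next cell only

    def push(self, v):
        if self.value is None:
            self.value = v
        else:
            if self.below is None:
                self.below = Cell()        # local expansion: add one more cell
            self.below.push(self.value)    # hand the old value to the neighbour
            self.value = v

    def pop(self):
        v = self.value
        self.value = self.below.pop() if self.below else None
        return v

top = Cell()
for x in (1, 2, 3):
    top.push(x)
print(top.pop(), top.pop(), top.pop())   # 3 2 1
```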

Chapter 3: Common Special Case: Elementary System Nets

About elementary system nets, i.e., the nets with plain, indistinguishable tokens.

[61] Lutz Priese, Harro Wimmel, "Theoretische Informatik: Petri-Netze"

[70] Wolfgang Reisig, Grzegorz Rozenberg, "Lectures on Petri Nets"

Chapter 4: Sequential and Distributed Runs

What the fuck happened here?

§ 4.7 Causal Order: distributed runs are able to represent causal order.

class.lsds.gmi

In Nr, the place E can be seen as a model of a resource that is used, but not used up. This forces the two actions a and b into one of two possible orders.
If an action α generates tokens that are used by another action β, then α causally precedes β. This "causally precedes" relation is, of course, transitive: if α precedes β and β precedes γ, then α also precedes γ. Furthermore, causally precedes is irreflexive: no action causally precedes itself. Thus, causally precedes is a strict partial order.
(...) Simultaneity is transitive; independence is not.

p. 46
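
A quick Python illustration of the "causally precedes" relation (mine; the action names are made up): start from the direct "α produces a token that β consumes" pairs and take the transitive closure; irreflexivity is what makes it a strict partial order.

```python
# My own illustration (invented action names): the direct "produces a token
# consumed by" pairs, closed under transitivity, give the "causally precedes"
# relation of a distributed run.

from itertools import product

def transitive_closure(pairs):
    """Smallest transitive relation containing the given (a, b) pairs."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(closure, repeat=2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

direct = {("alpha", "beta"), ("beta", "gamma")}   # alpha feeds beta, beta feeds gamma
precedes = transitive_closure(direct)

assert ("alpha", "gamma") in precedes             # transitive
assert all(a != b for (a, b) in precedes)         # irreflexive: a strict partial order
```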

Two aspects of distributed runs are particularly important (...): (...) the control of the state space explosion during verification through model checking [49], [77].

p. 48

[10] E. Best, César Fernández, "Non-Sequential Processes"

[23] Javier Esparza, Keijo Heljanko, "Unfoldings: A Partial-Order Approach to Model Checking"

[49] K. L. McMillan, "Using unfoldings to avoid the state explosion problem in the verification of asynchronous circuits"

[54] Mogens Nielsen, Gordon Plotkin, Glynn Winskel, "Petri nets, event structures and domains"

[77] Antti Valmari, "The state explosion problem"

# Read and Write vs. Give and Take
(...)
This prompts us to try to translate programs into Petri nets and vice versa: each variable z in a program corresponds to a place pz in a system net N, and the current value w in z corresponds to a token in pz. The reading of z is simulated with a transition t and a loop between pz and t:
[Petri net drawing]
In a concurrent program with two processes, a second process in N would generate a second loop between pz and a second transition t'.
[Petri net drawing]
In each run of N, the transitions t and t' can occur arbitrarily often, but always one after another and never independently. For example, t' may occur infinitely often and block t. This contradicts the assumption that processes can read a variable independently without interfering with each other. If the updating of the variable z is also modeled as a loop between a transition t'' and the place pz, the updating transition t'' can be blocked by the reading transition t, in contradiction to the common assumption of concurrent programming in which the updating of a variable is never blocked by the reading of the variable. This observation has far-reaching consequences, some of which will become apparent in the case study of the mutual exclusion system in Chap. 20.
The translation of a system net N into a program P synchronizes the transitions of N into one or more control flows. The structure
[Petri net drawing]
of N generates (among others) a distributed run in which r and t occur independently from each other. A corresponding program would have to organize the nondeterminism between r and s and between t and s and thus force r and t under a central control.
Finally, programming languages and many other operational modeling techniques allow for multiple control flows. They are, if necessary, dynamically created or terminated, but rarely reach the necessary flexibility to model, for example, the solution to the kindergarten game in Fig. 4.13.

pp. 48-49
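
To see why the loop construction serializes the readers, here is a minimal token-game sketch in Python (my own code, not from the book): the variable z becomes a place pz holding one token, and both readers are transitions with pz in their pre- and post-set.

```python
# My own sketch: the variable z as a place "pz" with one token, and two readers
# t and t' modeled as loops on pz. Each occurrence takes the token and puts it
# back, so t and t' can only occur one after another, never independently.

marking = {"pz": 1}

# transition name -> (pre-set, post-set), each as a place -> multiplicity dict
transitions = {
    "t":  ({"pz": 1}, {"pz": 1}),   # first reader: loop on pz
    "t'": ({"pz": 1}, {"pz": 1}),   # second reader: loop on pz
}

def enabled(name):
    pre, _ = transitions[name]
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(name):
    assert enabled(name)
    pre, post = transitions[name]
    for p, n in pre.items():
        marking[p] -= n
    for p, n in post.items():
        marking[p] = marking.get(p, 0) + n

# Both readers are enabled, but they compete for the single token: while one
# occurs it holds the token the other needs, so they are never independent.
assert enabled("t") and enabled("t'")
fire("t")
assert enabled("t'")
```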

Chapter 5: Scenarios

Scenarios are sort of like common or repeatable sequences of actions ("firings", more or less). For example, in the cookie machine net, selling a cookie box may be a scenario.

(...) To form a complete scenario, the tokens in the counter, the cash box and the storage would have to match at the outset and at the end. To also cover such cases, we will weaken the definition of a scenario of a system net N:
[Fig 5.4]
[Fig 5.5]
A scenario for a marking M of N is a finite partial distributed run K of N in which a part of °K and a part of K° represent the marking M. (...)

pp. 54-55

Thinking in scenarios can significantly simplify and deepen our understanding of a complex system.

p. 55

[16] Jörg Desel, Gabriel Juhás, "“What Is a Petri Net?” Informal Answers for the Informed Reader"

[35] D. Harel and R. Marelly, "Come, Let's Play: Scenario-Based Programming Using LSCs and the Play-Engine"

Harel greatly extends this idea for Live Sequence Charts [35] and integrates it into a tool.

p. 56

Chapter 6: Further Notations for Elementary System Nets

Montanari and Rossi [52] propose context places. A context place can be accessed by multiple transitions simultaneously without hindrance. They can, for instance, be used to solve the problem described in the postscript to Chapter 4: Read and Write vs. Give and Take.

p. 63 § 6.3 Real Extensions

[18] Jörg Desel, Wolfgang Reisig, Grzegorz Rozenberg, "Lectures on Concurrency and Petri Nets"

[52] Ugo Montanari, Francesca Rossi, "Contextual nets"

[70] Wolfgang Reisig, Grzegorz Rozenberg, "Lectures on Petri Nets"

[75] Rüdiger Valk, "Self-modifying nets, a natural extension of Petri nets"

[76] Rüdiger Valk, "Object Petri Nets"

Chapter 7: The Synthesis Problem

Fig. 7.1 The light/fan system as a state automaton

Fig. 7.2 The light/fan system as an elementary system net

Fig. 7.2 shows the system as an elementary system net N. It has four places describing the local states as well as four transitions, one for each action of the system.
The representation as a system net clearly describes the cause and effect of each action. (...)

p. 66

[Marking graph] G is isomorphic to [a state automaton] Z if every node k of G maps to exactly one node k' of Z such that:
* the initial marking of G is mapped to the initial state of Z,
* h--t-->k is a step in G iff h'--t-->k' is a step in Z.
To summarize: a system net N solves the synthesis problem of a given (abstract) state automaton Z if the marking graph G of N is isomorphic to Z.

p. 67
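
A rough Python sketch of checking this isomorphism condition (mine; it assumes the step relation is deterministic and only looks at the reachable nodes): walk G and Z in lockstep from their initial nodes and build the node mapping on the way.

```python
# My own sketch, assuming deterministic steps and considering reachable nodes
# only: G and Z are isomorphic (in the sense above) if a lockstep traversal
# from the initial nodes yields a consistent one-to-one node mapping.

def isomorphic(g_steps, g_init, z_steps, z_init):
    """g_steps/z_steps: node -> {step label: successor node}."""
    mapping = {g_init: z_init}     # initial marking of G maps to initial state of Z
    worklist = [g_init]
    while worklist:
        g_node = worklist.pop()
        z_node = mapping[g_node]
        g_out = g_steps.get(g_node, {})
        z_out = z_steps.get(z_node, {})
        if set(g_out) != set(z_out):            # same step labels on both sides
            return False
        for label, g_succ in g_out.items():
            z_succ = z_out[label]
            if g_succ in mapping and mapping[g_succ] != z_succ:
                return False
            if g_succ not in mapping:
                mapping[g_succ] = z_succ
                worklist.append(g_succ)
    return len(set(mapping.values())) == len(mapping)   # the mapping is one-to-one

# Two isomorphic two-node systems with steps "on" and "off" (invented example):
G = {"m0": {"on": "m1"}, "m1": {"off": "m0"}}
Z = {"z0": {"on": "z1"}, "z1": {"off": "z0"}}
print(isomorphic(G, "m0", Z, "z0"))   # True
```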

There exist state automata Z whose synthesis problem cannot be solved by any 1-bounded elementary system net.

p. 68

p. 69 defines "R receives π", "R dispatches π", and "R contains π". With those notions, R is a region of Z if, for each edge label t of Z, all edges labeled t relate to R in the same way.

A region R of Z is minimal if no proper subset of R is a region of Z.

p. 69

pp. 69-70 define how to construct the system net N of Z.

Theorem 1 (Synthesis Theorem). If the synthesis problem of a state automaton Z can be solved by a 1-bounded elementary system net, then the system net of Z is a solution.
With this, it is possible to solve the general synthesis problem: one constructs the system net N of a given state automaton Z and from this the marking graph G of N. If Z and G are isomorphic, N obviously solves the synthesis problem of Z. Otherwise, the synthesis problem of Z cannot be solved by any 1-bounded elementary system net.

p. 70
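
A brute-force Python sketch of the region condition as I understand it (my own code and an invented four-state automaton; the construction on pp. 69-70 then turns regions into places): a subset R of the states is a region when, for every edge label t, all edges labeled t relate to R in the same way.

```python
# My own brute-force sketch of the region condition (invented example automaton):
# a subset R of states is kept when, for every edge label t, all t-labeled edges
# behave the same toward R: all dispatched by R, all received by R, or none
# crossing the border of R.

from itertools import chain, combinations

def regions(states, edges):
    """edges: iterable of (source, label, target) triples."""
    labels = {t for (_, t, _) in edges}
    found = []
    subsets = chain.from_iterable(combinations(states, k) for k in range(len(states) + 1))
    for candidate in subsets:
        R = set(candidate)
        is_region = True
        for t in labels:
            kinds = set()
            for (src, lab, dst) in edges:
                if lab != t:
                    continue
                if src in R and dst not in R:
                    kinds.add("dispatched")
                elif src not in R and dst in R:
                    kinds.add("received")
                else:
                    kinds.add("not crossing")
            if len(kinds) > 1:          # the t-labeled edges disagree about R
                is_region = False
                break
        if is_region:
            found.append(R)
    return found

# Invented example: labels "a" and "b" each occur on two edges.
states = ["s0", "s1", "s2", "s3"]
edges = [("s0", "a", "s1"), ("s2", "a", "s3"),
         ("s1", "b", "s2"), ("s3", "b", "s0")]
for R in regions(states, edges):
    print(sorted(R))     # includes the trivial regions: the empty set and all states
```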

(...) Bernardinello [8] confirmed the assumption that the minimal regions suffice. (...)
Many authors took part in generalizing the 1-boundedness in Theorem 1 to generic elementary system nets and in including even inhibitor edges. With this, the synthesis problem of much more general graphs becomes solvable. (...)

p. 72

(...) The fact that this is possible marks Petri nets as a "very natural modeling technique" for distributed systems.

p. 73

Chapter 8: Composition of Nets

p. 75 mentions "interfaces", "open nets"

Fig. 8.1 Three interface nets

pp. 76-77: an interface net is a net with an interface I⊆P∪T. The interface consists of the places and transitions shared with other nets across the net's border; these are called interface elements, all others inner elements. After composing two interface nets, the interface elements they have in common become inner elements, and the remaining interface elements form the interface of the composed net: I = (I1∪I2)\(I1∩I2).

p. 77: composition of two interface nets is commutative, but it is associative only if each interface element is shared by at most two of the nets.

Theorem 2 (Composition Theorem for Interface Nets). For i=1,2,3, let Ni be interface nets with the interfaces Ii.
* (a) N1+N2 = N2+N1.
* (b) If I1∩I2∩I3=∅, then (N1+N2)+N3 = N1+(N2+N3).

p. 77
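
The interface bookkeeping is easy to spell out in Python (my own representation, not the book's notation): shared interface elements become inner, the rest form the new interface.

```python
# My own sketch of interface-net composition: the elements shared by both
# interfaces become inner elements of the result, everything else stays in
# the interface, i.e. I = (I1 ∪ I2) \ (I1 ∩ I2).

def compose(n1, n2):
    """Each net: {'elements': all places and transitions, 'interface': shared part}."""
    i1, i2 = n1["interface"], n2["interface"]
    return {
        "elements": n1["elements"] | n2["elements"],
        "interface": (i1 | i2) - (i1 & i2),
    }

n1 = {"elements": {"p1", "t1", "a"}, "interface": {"a"}}
n2 = {"elements": {"p2", "t2", "a", "b"}, "interface": {"a", "b"}}

composed = compose(n1, n2)
print(composed["interface"])                # {'b'}: the shared element 'a' became inner
assert compose(n1, n2) == compose(n2, n1)   # Theorem 2(a): composition is commutative
```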

Theorem 2(b) is used to define when a set of interface nets is associative.

Theorem 3 (Associativity Theorem). Let M be a set of pairwise communicating interface nets. Then M is associative.

p. 78

Communicating nets often only have in- and out-places in their interfaces, that is, no transitions and no places with transitions in both their pre- and post-sets. Such nets are often called open.

p. 78
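
A small Python check of that "open" condition (my own representation): no interface transitions, and no interface place that has transitions in both its pre- and post-set.

```python
# My own sketch: an interface net is "open" if its interface contains no
# transitions and every interface place is a pure in- or out-place, i.e. it
# does not have transitions in both its pre-set and its post-set.

def is_open(transitions, interface, arcs):
    """arcs: set of (x, y) pairs, either place -> transition or transition -> place."""
    for x in interface:
        if x in transitions:
            return False                               # interface transition: not open
        fed_by_some_transition = any(dst == x for (_, dst) in arcs)
        feeds_some_transition = any(src == x for (src, _) in arcs)
        if fed_by_some_transition and feeds_some_transition:
            return False                               # neither a pure in- nor out-place
    return True

# Invented example: "msg" only receives tokens from t1, so the net is open.
print(is_open(transitions={"t1"},
              interface={"msg"},
              arcs={("p", "t1"), ("t1", "msg")}))      # True
```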

From this it follows that open nets cannot have shared state!

p. 79 what?

Theorem 4 (Theorem on Minimal Open Subnets). Let N be a net and let N1, ..., Nk be the minimal open subnets of N. Then:
* (a) {N1, ..., Nk} is associative.
* (b) N = N1 + ... + Nk.

p. 79

In this chapter, we have defined the composition of two nets via shared elements. Alternatively, it is possible to assign labels to elements and to define composition via equally labeled elements.

p. 80