2. The Arbitrary Oscillator
To build Eq. (1) we need to introduce a particular cellular automaton.
Figure 2. The Arbitrary Oscillator.
In the figure, a “bug” can move on a horizontal x axis exclusively between two stable positions, A and B. The decision to move (or to stay) is free for the “bug”, and therefore not predictable. At the boundary positions the “insect” encounters the outside world, which can present a “safe” condition (food, reproduction, etc.) or a “deadly” condition (predators, danger, internal disease, etc.). These occurrences are also not predictable. At each time step the “bug” decides what to do: stay or move (say “0” for stay and “1” for move). A “path-space” for this Arbitrary Oscillator (ArbO) can be envisaged. It includes a space and a time coordinate and a “d” path index, collecting all the possible decision sequences.
Figure 3 shows the scheme. The figure depicts a sequence of four time steps with binary decisions labeled “0” or “1”, amounting to 16 possible paths (or “orbits”) in total. The sequence starts at a common initial spatial position. Note, at the sides of the figure, the “extreme” decision possibilities: a) {1,1,1,1} -> always move; b) {0,0,0,0} -> always stay.
Figure 3. Path-space of the Arbitrary Oscillator for 4 steps of time decisions.
The same situation is shown in Figure 4, where the spatial coordinate is omitted. At each decision step, the rectangular “cell” splits into two subsequent underlying cells. By convention, the left new cell represents a “no move” or “0” decision, while the right new cell results from a “move” or “1” decision. The total final possibilities are 16 different sequences, classified by the “d” index shown at the bottom of the last group of cells. The d index label can follow any preferred coding rule. The cycle starts at an initial zero step, common to all subsequent paths, where the next decision is drawn, resulting in the two different cells of the first step, and so on.
Figure 4. Possible choice sequences.
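As a simple illustration of the path-space just described (a sketch added here for clarity, not part of the original figures), the following Python snippet enumerates the 2^4 = 16 binary decision sequences of Figure 4; the d index is taken as the plain enumeration order, which is one of the possible coding rules.

```python
from itertools import product

STEPS = 4  # four decision steps, as in Figures 3 and 4

# every possible decision sequence: "0" = stay, "1" = move
paths = list(product("01", repeat=STEPS))
print(len(paths))  # -> 16 possible paths ("orbits")

for d, path in enumerate(paths):
    # "d" is one possible path index; any other coding rule would work as well
    print(d, "".join(path))
```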
In the following Figure 5 we introduce, for our ArbO, the risk of an End of Life (EoL) event. In this case, the crossed cells mark the occurrence of an EoL event. Of course, in this case no subsequent cells can be expected and the final “d” sequence is truncated. The EoL events are randomly distributed and can arise at any available decision cell, one per cell, on one or many cells of the level.
With the EoL event possibility, an end of the full “life cycle” of the ArbO can be considered, as per Figure 6. Here the four-step case is presented again. In the figure, the whole “path-space” is covered by sequences terminating with an EoL event. Note that, for the four-step case in this example, we find five different path histories.
Figure 6. A fully ended life cycle of the arbitrary oscillator.
3. The Arbitrary Oscillator Equations
The above processes can be mathematically described assuming that:
i. The EoL events cover only one cell per event. Their occurrence is not predictable, in number and position, at any time-step level and over the available cells of that level.
ii. The number of possible ArbO decision sequence paths is limited to a maximum of Total Cases (TC).
Statement (i) formalizes the ArbO behavior described in the previous Section 2. Statement (ii) simply means that the step sequence cannot be “perpetual”, i.e. unlimited. Consider indeed that after 116 steps (without EoL events) the number of possible sequences in the path space becomes about 2^116 ≈ 10^35. In this case, supposing L to be a fixed length in the d-space dimension, the mean distance between the paths will be L·10^-35. With assumptions (i) and (ii), we can then define the following relations and conditions:
We define m_r as the number of EoL events at step r and v_r as the remaining “safe” decision cells at the same step level. Referring e.g. to Figure 5, at the step-2 level we have m_2 = 1 and v_2 = 3. The factor of 2 in Eq. (2) comes from the “binary” decision rule. Solving the recursion Eq. (2) for the m variables, we find:
From statement (ii) it follows that any sequence of decisions must end with an EoL event. The total number of these terminated paths must be TC. Then:
It is easy to demonstrate that:
The presence of the v_r variables in the above relations can be eliminated if we consider the “final” level of the ArbO life cycle: there, it must be v_Rmax = 0. Then, factoring out the term 2^r and renaming the t index as r, Eq. (4) leads to Eq. (7), in equivalent expressions:
∑ (r = 1…Rmax) m_r·2^(Rmax - r) = 2^Rmax, or equivalently ∑ (r = 1…Rmax) m_r·2^(-r) = 1 (7)
Eqs. (5), (6) and (7) build up a system of Diophantine equations that we call the “S_TC system” (see Ref. [8]). The number of {m_r, v_r} solutions of these systems varies with TC and quickly grows exponentially. No explicit solution formula for an S_TC system is known to the author. By computer analysis one can retrieve the solutions, for TC values up to a few tens, in a reasonable machine time. For example, with TC = 26, the number of solutions is computed as 565168. With TC = 5 and Rmax = 4, the S_5 system equations and solutions are (the last solution corresponding to Figure 6):
8m1 + 4m2 + 2m3 + m4 = 16
m1 + m2 + m3 + m4 = 5
{{m1 → 0, m2 → 3, m3 → 2, m4 → 0, v0 → 1, v1 → 2, v2 → 1, v3 → 0, v4 → 0},
{m1 → 1, m2 → 0, m3 → 4, m4 → 0, v0 → 1, v1 → 1, v2 → 2, v3 → 0, v4 → 0},
{m1 → 1, m2 → 1, m3 → 1, m4 → 2, v0 → 1, v1 → 1, v2 → 1, v3 → 1, v4 → 0}}
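As a hedged illustration (not part of the original paper), the short Python sketch below brute-forces the S_5 system above; it assumes the recursion v_r = 2·v_(r-1) - m_r with v_0 = 1 to rebuild the v values, and it recovers exactly the three {m_r, v_r} solution sets listed above.

```python
from itertools import product

TC, RMAX = 5, 4

solutions = []
for m in product(range(TC + 1), repeat=RMAX):            # candidate (m1, ..., m4)
    if sum(m) != TC:                                      # m1 + m2 + m3 + m4 = 5
        continue
    if sum(mr * 2 ** (RMAX - r) for r, mr in enumerate(m, start=1)) != 2 ** RMAX:
        continue                                          # 8m1 + 4m2 + 2m3 + m4 = 16
    # rebuild the v values from the assumed recursion v_r = 2*v_{r-1} - m_r, v_0 = 1
    v = [1]
    for mr in m:
        v.append(2 * v[-1] - mr)
    if all(x >= 0 for x in v) and v[-1] == 0:             # feasible, fully ended life cycle
        solutions.append((m, tuple(v)))

for m, v in solutions:
    print("m =", m, " v =", v)                            # three solutions expected
```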
Note that, given a solution set {m_r}, the corresponding {v_r} set is obtained from Eq. (4). Finally, it can be shown that, defining the Q_r quantities as per Eq. (8), Eq. (9) follows.
Note that the {Q_r} set can be viewed as the set of the ArbO “critical decisions”, whose outcome can be a safe or a “deadly” end.
4. The Application of the Fermi Statistics Method
There are some formal analogies between the arbitrary oscillator model described above and the quantum models studied in the last century by E. Fermi to derive his statistics for ideal gases of molecules and electrons. In both cases we deal with mutually indistinguishable objects that can be arbitrarily allocated into available empty positions over certain predefined levels, with only one item per position. In both cases, any possible allocation must comply with some boundary conditions. In what follows we adopt the approach of the great scientist, even using the same notation and rationale. This Fermi method is well described in his book “Molecules, Crystals and Quantum Statistics”, Ref. [9]. The rationale is as follows:
- we seek the most probable solution of an S_TC system
- to achieve this, we consider the number of ways to distribute the m_r events over the available Q_r cells at each step level
- the most probable solution is the one with the maximum number of such ways, say ∏
- we look for this maximum by maximizing Log(∏), which is equivalent, using the method of Lagrange multipliers
- finally, we consider the boundary conditions to fix the unknown parameters
Considering our ArbO model, the above-said ∏ value will be:
where the m_r and, accordingly, the Q_r, are solutions of an S system and the symbol (·,·) represents the binomial coefficient. Passing to the natural logarithm Log( ) and using the Stirling approximation formula, we have:
We now consider Eq. (4), where we extract the m_r term from the ∑ symbol, obtaining:
(11)
Now, recalling Eq. (8), we have:
We see that, in the above equation, the Q_r variable is independent of m_r, which is reasonable since Q_r = 2·v_(r-1). We can now search for the maximum of Log(∏) by looking for the maximum of the expression:
where a, b are undetermined constant coefficients (according to the Lagrange multipliers method), and C1, C2 are the left-hand sides of Eqs. (5) and (7), also constant in total value. We then search for the null condition of the derivative with respect to m_r of the expression:
(13)
After some mathematical steps, we find -as the null condition- the expression:
(14)
Recalling Eq. (12), we have:
To find the values of the a and b coefficients, we sum over r and, imposing the boundary conditions (5), (9) and (10), we obtain, after some algebra, the following expression, which leads to Eq. (1) presented in the first section:
If we introduce the parameter rF = log(TC-2)/log(2), we obtain the following set of equations, which represent, in recursive form, the most “probable” solution set of a Diophantine S system. We use here the bold character to distinguish this particular solution set from the other possible solutions.
Q_r = 2·v_(r-1),  m_r = Q_r / (1 + 2^(rF - r)),  v_r = Q_r - m_r,  with v_0 = 1 (15)
5. Properties of the Most Probable Solution
From Eqs. (15) we see that:
m_r / Q_r = 1 / (1 + 2^(rF - r)) (16)
The form of Eq. (16) leads to a Cumulative Logistic Distribution shape. This is shown in Figure 7 for TC = 100 000, where the dashed line intercepts the curve at the 0.5 level and the r axis at the rF point, with rF = 16.6096.
Figure 7. (mr /Qr) Logistic shape for TC = 100000.
The above curve says that, with 100 000 decision cases, the most probable {m_r} solution of the S system defined by Eq. (7) will have the initial m values at zero; then, quite soon around rF, the m values become similar to the v values and, finally, all the m (EoL) values saturate the available Q levels, meaning a life cycle end for the cellular automaton. It is seen that, even if Rmax = 100 000 - 1, the most probable solution set involves a number of non-zero variables limited to a few tens. This is explained by the logarithmic dependence of rF on TC (see Eq. (15)). It is noteworthy that there also exist solutions with about 100 000 non-zero m variables, e.g. the trivial solution {m1 = 1, m2 = 1, ..., m99998 = 1, m99999 = 2}, but these have a far lower probability to arise. Let us now consider the following Table 1, where, for the TC = 100 000 case, the values of m, v, Q (rounded to integer values) are presented together with the mortality figures for the years 1974 and 2019 in Italy (ISTAT dx data and 2019 qx data), Ref. . The calculation of m, v, Q is done by computer, via iteration across the r steps, according to Eq. (15).
Table 1. m, v, Q values with ISTAT 1974 & 2019 data for 100000 Total Cases.
Years interval | r | m | v | Q | ISTAT-2019 1000·qx | ISTAT-2019 dx | ISTAT-1974 dx |
Up to 4 years | 1 | 0 | 2 | 2 | 3,3453 | 335 | 2735 |
5-9 | 2 | 0 | 4 | 4 | 0,36563 | 36 | 180 |
10-14 | 3 | 0 | 8 | 8 | 0,43792 | 44 | 170 |
15-19 | 4 | 0 | 16 | 16 | 0,99425 | 99 | 334 |
20-24 | 5 | 0 | 32 | 32 | 1,429 | 142 | 385 |
25-29 | 6 | 0 | 64 | 64 | 1,63059 | 162 | 368 |
30-34 | 7 | 0 | 128 | 128 | 1,9993 | 198 | 505 |
35-39 | 8 | 1 | 255 | 255 | 2,86143 | 283 | 686 |
40-44 | 9 | 3 | 507 | 509 | 4,69355 | 463 | 1128 |
45-49 | 10 | 10 | 1003 | 1014 | 7,33727 | 721 | 1863 |
50-54 | 11 | 40 | 1966 | 2007 | 11,46247 | 1118 | 2799 |
55-59 | 12 | 155 | 3778 | 3933 | 18,46051 | 1780 | 4470 |
60-64 | 13 | 572 | 6984 | 7556 | 29,60728 | 2801 | 6265 |
65-69 | 14 | 1966 | 12002 | 13968 | 47,28093 | 4341 | 9089 |
70-74 | 15 | 5924 | 18079 | 24003 | 75,57279 | 6611 | 12542 |
75-79 | 16 | 14315 | 21843 | 36158 | 130,66618 | 10566 | 16581 |
80-84 | 17 | 24780 | 18906 | 43686 | 227,36915 | 15984 | 17535 |
85-89 | 18 | 27370 | 10441 | 37811 | 404,2388 | 21956 | 13839 |
90-94 | 19 | 17537 | 3345 | 20882 | 621,66989 | 20117 | 6806 |
95-99 | 20 | 6107 | 582 | 6690 | 798,55747 | 9776 | 1591 |
100-104 | 21 | 1112 | 53 | 1165 | 931,49213 | 2297 | 127 |
105-109 | 22 | 104 | 2 | 106 | 986,30673 | 167 | 2 |
110-114 | 23 | 5 | 0 | 5 | 998,26306 | 2 | 0 |
115-119 | 24 | 0 | 0 | 0 | 999,85919 | 0 | 0 |
Total | | 100001 | 100000 | 200002 | | 99999 | 100000 |
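For readers who want to reproduce the m, v, Q columns of Table 1, the minimal Python sketch below iterates the assumed recursive form discussed above (Q_r = 2·v_(r-1), m_r/Q_r = 1/(1 + 2^(rF - r)), v_r = Q_r - m_r, v_0 = 1, rF = log2(TC - 2)); real values are carried through the iteration and rounded only for display, which reproduces the table within rounding.

```python
import math

TC = 100_000
rF = math.log2(TC - 2)                  # ~16.6096 for TC = 100 000

v = 1.0                                 # v_0: the single initial cell
for r in range(1, 25):                  # the 24 five-year intervals of Table 1
    Q = 2.0 * v                         # each surviving cell splits into two decision cells
    m = Q / (1.0 + 2.0 ** (rF - r))     # assumed most probable EoL count at step r
    v = Q - m                           # remaining "safe" cells
    print(f"r={r:2d}  m={round(m):6d}  v={round(v):6d}  Q={round(Q):6d}")
```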
The Table 1 data for m, v, Q are shown graphically in Figure 8 below, using a suitable computer interpolation algorithm to generate continuous lines. The dashed line starts at rF and intercepts, as expected, the crossing of the m and v curves. It can be shown that the peaks of the three curves keep their relative positions, around the rF value and between themselves, for any rF value.
Figure 8. Interpolated m, v, Q curves.
7. The Continuous Model Equations
7.1. Introducing the Continuous Functions
The above Eqs. (15) are finite-difference equations, which require, for numerical evaluation, a recursive computation. We want to transform them into explicit expressions of the m, v, Q variables as functions of time and of the TC parameter. This can be useful for easier model handling and simulation. We therefore look for the differential equations possibly associated with Eqs. (15) and for their solution. In doing this, we introduce the m(t), v(t), Q(t) functions of a continuous time variable t such that:
The above conditions imply that the continuous curves m, v, Q pass through the discrete points (r, m_r), (r, v_r), (r, Q_r) respectively, identified by the recursive Eqs. (15) (*). We keep the bold face of the function symbols, meaning that we are again handling the particular (most probable) solution set of Eqs. (15). Since the ArbO model is essentially a discrete binary model, the continuous functions defined above do not have a “physical” meaning (like, e.g., a number of choice cells), but they can have a statistical meaning if we imagine many ArbO objects starting at the same time and evolving independently. In this case the m, v, Q curves quantify the mean number of the corresponding random choices and events along a sliding time interval.
Considering Eq. (2), we know that, in a continuous t domain and with m(t) = 0, the equation becomes v' = k·v, giving the solution v(t) = 2^t, with k = Log(2). Extending the analysis to include m(t) ≠ 0, we consider the differential equation:
This equation still reduces to Eq. (2) if m(t) = 0 or if m(t) = v(t) = const. For the general case m(t) ≥ 0, we try some possible expressions of the g constant, looking for the best fit against the points calculated with our recursive model. We find that a good fit is reached with a value such that (Assumption (iii)):
giving the differential equation:
7.2. Solving the Equations
Equation (17) involves two unknown time functions, v(t) and m(t). Recalling relation (16), which has no recursive form, we can extend it to the continuous domain. We thus obtain a dependence between v and m such that:
v(t) = m(t)·2^(rF - t) (18)
Using such a relation we can substitute the v terms in the differential Eq. (17), thus obtaining a differential equation in the single unknown function m(t). To avoid burdening the reader, we jump directly to the computed solutions of the combined Eqs. (17) and (18). For the solution computation we define a boundary condition which, consistently with Eq. (7), is:
More details will be available in Ref. [8]. These computations are performed with the aid of a well-known commercial mathematical computer application. The solutions have the following form:
The g, k terms are pure numbers, while c1 and rF are constants depending only on the TC parameter.
With these assumptions, the equation groups (19) and (20) can be used to characterize the particular ArbO model presented in Sections 4 and 5, giving m, v, Q as functions only of time and of the TC parameter.
7.3. The Continuous qx(t) Mortality Probability Definition
In the case of the discrete ArbO model we compute the qx probability values with the same method as the standard demographic tables, Refs. [3-7]:
qx_r = m_r / ∑ (s ≥ r) m_s
that is, we compare the mortality figure m_r of the current r interval with the surviving subjects entering at the beginning of the r interval, i.e. the denominator of the above formula. In our case the m_r figures are given by the ArbO algorithm, while the standard demographic tables use the dx values sampled in the demographic survey. For the ISTAT data we use the qx × 1000 values already available in the standard tables, and we take these data as a reference for our model, just like the dx data, also available in the demographic tables. Note again that our qx calculation is a process independent of the demographic data, to which it can be compared assuming the same sample TC value and a suitable r scale such that x [years] = 5·r. With these assumptions in mind, and with the equations of subsection 7.2, we can also define the continuous function qx(t) as:
where N(t) is the number of surviving subjects at instant t (Assumption (iv)):
(21)
This equation comes both from the above discrete definition of qx and from the introduction of a “trimming” term with a gg (≠ g) constant. The trimming term is needed to compensate for the numerical precision when passing from the Sum to the Integral function. In practical calculations the gg value results as gg = 0.5. An explicit form of N(t) is given in Ref. [8]; this form has been obtained by solving the integral in Eq. (21) using the (19) formulas.
7.4. The Simulated Curves
With the above foundations we show some graphs (see Figure 11 and following) with the defined m(t), v(t), Q(t), qx(t) functions superimposed on the points computed with the recursive algorithm based on the (15) relations, assuming a TC value of 100 000. The t values run continuously along the range of the integer r steps. The dots almost coincide, except for minor deviations, with the m(t) curve. We see that the interpolation shapes of the numerical data given in Table 1 and Figure 8 are confirmed, and also that the areas under the curves are close to the expected sizes (Eqs. (5) & (9)). The “dotting” of the Q curve is omitted to avoid overloading the Figures.
Figure 11. Comparison between the continuous curves m,v,Q and the recursive ArbO discrete data, TC=100 000.
Figure 12. Comparison between the continuous qx curve and the ArbO discrete data, TC=100 000.
We also see from Figure 12 that the simulated qx(t) curve fits the computed ArbO model qx dots well (the same blue dots as in Figure 10). The shape of the theoretical qx(t) also confirms a flattening at high ages.
7.5. The m, v, Q, qx Curve Features vs Different TC Parameter Values
Until now we have considered a fixed reference value of TC = 100 000. It is interesting to look at the effects of different TC values on the curves. We found that the curves keep the same shape and aspect, as per Figure 13 and Figure 14, with TC = 10 000 and 1 000 000 respectively. We also see that the curves slide along the x axis due to their different rF values (located under the crossing of the m and v curves), and that they accordingly have different areas, as per the specific TC value. A similar analysis can be done for the theoretical qx(t) magnitudes, shown in Figure 15. These show a convergence of the qx probabilities at advanced ages. The wide span between the curves is due to the three orders of magnitude spanned by the TC values. All these figures show theoretical data, continuous and discrete, coming from the ArbO model.
Figure 13. Comparison between the continuous curves m,v,Q and the recursive ArbO discrete data, TC=10 000.
Figure 14. Comparison between the continuous curves m,v,Q and the recursive ArbO discrete data, TC=1 000 000.
Figure 15. Theoretical qx(t) values computed with the ArbO model for different TC parameter values.
It is also interesting to consider how the “peaks” of these curves change with different TC values. This is done by calculating the position of the null derivative of each curve on the r step axis (giving a fractional real r value). The result is shown in Figure 16, where the positions of the curve peaks (presented in years) are plotted over a range of TC values. It is seen that the relative distances between the peak positions and the rF level remain constant. The figure also shows the logarithmic shape of the bundle, since rF = log(TC-2)/log(2). Figure 16 also includes a line called “lifespan”, defining the position of the r value at which the m curve reaches 10% of its peak value (on the “older ages” right side). This “delta” from the m peak is also constant, resulting in about 2.77 r intervals (i.e. about 13.8 years from the maximum mortality peak position).
Figure 16. Peak positions for the continuous curves m,v,Q vs TC values.
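As a rough cross-check of this behaviour, the sketch below (an assumption-laden illustration: it uses the same assumed recursion as in the Table 1 sketch, and the discrete peak rather than the null derivative of the continuous curves) locates the m-curve peak for a few TC values and converts it to years with x = 5·r.

```python
import math

def arbo_m_curve(TC, max_steps=200):
    """m_r values from the assumed most-probable recursion, until the cells run out."""
    rF, v, m_vals = math.log2(TC - 2), 1.0, []
    for r in range(1, max_steps + 1):
        Q = 2.0 * v
        m = Q / (1.0 + 2.0 ** (rF - r))
        v = Q - m
        m_vals.append(m)
        if v < 1e-9:                    # no safe cells left: life cycle ended
            break
    return m_vals

for TC in (10_000, 100_000, 1_000_000):
    m_vals = arbo_m_curve(TC)
    peak_r = max(range(len(m_vals)), key=lambda i: m_vals[i]) + 1   # r is 1-based
    print(f"TC={TC:>9}:  m-curve peak near r = {peak_r}  (~{5 * peak_r} years)")
```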
7.6. A Possible Interpretation of the Real Demographic Mortality Curves
With the above theoretical foundations, if we look at Table 1 and Figure 9, we see that the demographic mortality tables show two different peak positions for the ISTAT1974 and ISTAT2019 data respectively. Considering the ISTAT2019 case, we assume that the actual data curve is composed of a mixture of two groups of subjects with different TC values coexisting in the demographic survey. Figures 17 and 18 show the total combined effect and the decomposition into the individual curves to be summed to obtain it. For this exercise we consider two TC groups with different percentage weights on the total normalized area of 100 000 events. These w_j weights can be defined as follows, where the p_j numbers represent the shares of the groups “labeled” TC_j that make up the overall total T of subjects; the w_j values therefore “normalize” the total sample under analysis to 100 000 cases:
thus the “mixed” mortality profile will be given, in the case of two components, by:
Figure 17. ISTAT2019 mortality data over a curve combined by sub-groups with different TC.
Figure 18. ISTAT2019 mortality data and the two sub-groups curves to be summed to give the combined total curve.
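A hedged sketch of one plausible reading of this mixing scheme (the exact weight formula is the one defined by the paper's equations; here each component curve is simply normalized to unit area and then weighted by its share of the 100 000 normalized cases; all input names are hypothetical):

```python
def mixed_mortality(curves, shares, total=100_000):
    """curves: one m(t) sequence per TC sub-group, sampled on the same r/t grid;
    shares: raw sub-group sizes p_j, summing to the overall total T."""
    T = sum(shares)
    mixed = [0.0] * len(curves[0])
    for curve, p in zip(curves, shares):
        area = sum(curve)                   # total events of this component
        weight = total * p / T              # assumed w_j: share of the normalized sample
        for i, value in enumerate(curve):
            mixed[i] += weight * value / area
    return mixed

# e.g. mixed_mortality([m_curve_TC1, m_curve_TC2], [p1, p2]) would give a two-component
# profile as in Figures 17-18 (m_curve_TC1/2 and p1/p2 being hypothetical inputs)
```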
The same approach is applied to the more elongated ISTAT1974 curve. With the aid of three combined curves with different TC values and group weights we obtain Figures 19 and 20.
Figure 19. ISTAT1974 mortality data over a curve combined by sub-groups with different TC.
Figure 20. ISTAT1974 mortality data and the three component sub-groups curves to be summed to give the combined total curve.
From the previous simulation test we see that one possible explanation for the shape of the real demographic mortality curves is the overlap of mortality from different component groups. The same considerations can be extended to the qx probability, seen as the result of a mix of sub-group components.
For this task we define the mixed qx curve, in the presence of e.g. three major components, as:
The formula is proposed for the three-component case of Figure 19 (ISTAT1974 data), using the same “weights”. The m(t, TC) and N(t, TC) functions are those obtained before with (19), (20), (21), applying the relevant TC parameter. The result is shown in Figure 21 below, where we find a good fit of the curves for ages above about 55 years (r = 11). In Figure 21 we also added the result of the same exercise for the ISTAT2019 data, with a qxmix(t) function calculated as above but with only two components (as in the Figure 17 situation).
The above hypothesis of a multi-component structure of the curves derived from the dx and qx data in the demographic Life Tables was studied in more detail by the author in the work presented in Ref. .
8. Final Comments and Future Analysis Development
General aspects of the model
The present work considers and mathematically describes an abstract object (the Arbitrary Oscillator, ArbO) that generates events that evolve over time and may also include instances of “death”. To find the most probable distribution of such events, Fermi statistics are used, obtaining a recursive computational algorithm that provides the solution as a function of a single parameter, TC. By means of scaling, the mortality predictions generated by the algorithm can be compared with actual demographic mortality data. This comparison shows that the theoretical curves generated by the algorithm are similar in shape to the real data.
Specific aspects of the model
The mortality pattern that emerged from the study does not appear to have an absolute fixed limit to the life span; such a limit is set once the TC parameter is fixed but, as the TC parameter changes, the maximum life expectancy limit shifts with a logarithmic law (Figure 16).
Increasing the TC parameter lengthens the life span limit accordingly, but the critical phase narrows in percentage terms. This is due to the constant time distance (~14 years) expected between peak mortality and the end point of the lifespan: for example, if the peak is at 80 years, one can hope to still live to 94 years (17% more); if the peak is at 100 years, one can hope to still live to 114 years (14% more).
Whether the flattening of the probability-of-death curve at old ages is real, or an effect of measurement methods and of the sparsity and incompleteness of data, is a matter of debate (Ref. [11]). In the case of the proposed model, the flattening is confirmed theoretically by the evolution of both the recursive data and the continuous simulation curves (Figure 15).
Actual mortality curves may be due to a mixture of components (sub-groups of subjects) with different TCs; this mixing of groups can explain the real curves of both mortality and mortality probability (Figures 17, 19, 21).
If we also look at the evolution of the ISTAT curves, from the past to today (Figure 9), we see that the spread around the mortality peak is sharpening, becoming closer to a single m(t) curve. The model would indeed seem to predict that, in more advanced social situations, the possibly different TC groups tend to approach a single final TC value, while more backward social situations may be explained by a broader distribution of TC groups.
Open points and possible future developments
When we associate an abstract object such as the ArbO with real living subjects, we can assume direct analogies with, e.g., mortality events; less clear, in the analogy, is the meaning of the TC parameter and also of the v and Q variables. For the latter two variables we can give interpretations such as: number of “critical” decisions/events in a time interval (the Q variable) and number of “safe-ending” decisions/events in the same interval (the v variable).
For the TC parameter (“Total Cases”), there is no clear reference to the reality addressed in the analogy. In the ArbO model, the TC parameter limits the maximum number of paths in the “path space”; once this limit is reached, all possible events must lead to the end of the life cycle. In sociological terms, our TC could represent a generic social target parameter such as “longevity”, whose improvement corresponds to a similar improvement in life expectancy.
An example of such a “social” parameter can be the “Intrinsic Capacity” (IC), a concept introduced by the World Health Organization in the context of the healthy ageing target. The IC parameter refers to an individual’s overall functional reserve, that is, the set of physical and mental capacities that determine their well-being and autonomy as they age; this concept can be associated with our abstract TC parameter.
A further possible extension of our model could be considered for the case of growth by splitting of biological or physical objects. In this case the model can be adapted to that of a system growing by splitting of components, in the presence of a random risk of death and of an absolute limit (TC) on the number of elements. On the latter aspect, i.e. population growth and the patterns that can govern it, references such as Refs. [12-15] can be considered and studied in correlation with future developments of our model.