Technical, biological and other systems


The system models we have considered so far have been deterministic (certain): specifying the input influence uniquely determined the output of the system. In practice, however, this rarely happens: uncertainty is usually inherent in the description of real systems. For a static model, for example, uncertainty can be taken into account by writing the relation

y = F(x) + ε,    (2.1)

where ε is the error, normalized to the system output.

The reasons for uncertainty are varied:

– errors and interference in measurements of system inputs and outputs (natural errors);

– inaccuracy of the system model itself, which forces an error to be artificially introduced into the model;

– incomplete information about system parameters, etc.

Among the various methods of refining and formalizing uncertainty, the most widespread is the stochastic (probabilistic) approach, in which the uncertain quantities are treated as random. The well-developed conceptual and computational apparatus of probability theory and mathematical statistics makes it possible to give specific recommendations on choosing the structure of a system and estimating its parameters. The classification of stochastic models of systems and of methods for studying them is presented in Table 1.4. Conclusions and recommendations are based on the averaging effect: random deviations of the measurements of some quantity from its expected value cancel one another when summed, and the arithmetic mean of a large number of measurements turns out to be close to the expected value. Mathematical formulations of this effect are given by the law of large numbers and the central limit theorem. The law of large numbers states that if x₁, …, x_N are independent random variables with mathematical expectation (mean value) m and variance σ², then

(x₁ + … + x_N)/N ≈ m    (2.32)

at sufficiently large N. This indicates the fundamental possibility of estimating m from measurements with arbitrary accuracy. The central limit theorem, refining (2.32), states that

(x₁ + … + x_N)/N ≈ m + ξ·σ/√N,    (2.33)

where ξ is a standard normally distributed random variable.

Since the distribution of the quantity ξ is well known and tabulated (for example, it is known that P{|ξ| < 2} ≈ 0.95), relation (2.33) allows one to calculate the estimation error. Suppose, for example, we want to find out at what number of measurements the error in estimating their mathematical expectation will be less than 0.01 with probability 0.95, given that the variance of each measurement is 0.25. From (2.33) we obtain that the inequality 2·√0.25/√N < 0.01 must hold, from which N > 10000.
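This arithmetic is easy to check numerically. Below is a minimal sketch (assuming, purely for illustration, normally distributed measurements with true mean 0 and variance 0.25; the trial count and seed are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    N, sigma, trials = 10_000, 0.5, 2_000   # variance 0.25 => sigma = 0.5

    # For each trial, estimate the mean from N measurements and record the error.
    errors = np.abs(rng.normal(0.0, sigma, size=(trials, N)).mean(axis=1))
    print("P(error < 0.01) =", (errors < 0.01).mean())   # expect about 0.95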

Of course, formulations (2.32) and (2.33) can be given a more rigorous form, and this is easily done using the concepts of probabilistic convergence. Difficulties arise when one tries to verify the conditions of these rigorous statements. For example, the law of large numbers and the central limit theorem require independence of the individual measurements (realizations) of a random variable and finiteness of its variance. If these conditions are violated, the conclusions may fail as well. For example, if all measurements coincide, x₁ = x₂ = … = x_N, then, although all the other conditions are met, there can be no question of averaging. Another example: the law of large numbers does not hold if the random variables are distributed according to the Cauchy law, with distribution density p(x) = 1/(π(1 + x²)), which has neither a finite mathematical expectation nor a finite variance. And such a law does occur in life! For example, the integral illumination of points on a straight shore produced by a uniformly rotating searchlight located at sea (on a ship) and switched on at random moments of time is distributed according to Cauchy.
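A quick numerical illustration of this failure (a sketch; sample sizes and the seed are arbitrary): the mean of Cauchy samples refuses to settle down, while the mean of normal samples shrinks toward zero.

    import numpy as np

    rng = np.random.default_rng(1)
    for n in (10**2, 10**4, 10**6):
        cauchy_mean = rng.standard_cauchy(n).mean()
        normal_mean = rng.normal(size=n).mean()
        print(n, cauchy_mean, normal_mean)
    # The normal means approach 0 as n grows; the Cauchy means keep jumping around.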

But even greater difficulties arise in verifying the legitimacy of using the term “random” at all. What is a random variable, a random event, etc.? It is often said that an event A is random if, as a result of an experiment, it may occur (with probability p) or not occur (with probability 1 − p). Everything, however, is not so simple. The concept of probability can be connected with the results of experiments only through the frequency of occurrence of the event in a certain number (series) of experiments: ν_A = N_A/N, where N_A is the number of experiments in which the event occurred and N is their total number. If, at sufficiently large N, the frequencies ν_A approach some constant number p_A:

ν_A ≈ p_A,    (2.34)

then the event A may be called random, and the number p_A its probability. In this case the frequencies observed in different series of experiments should be close to one another (this property is called statistical stability, or homogeneity). The above also applies to the concept of a random variable, since a quantity ξ is random if the events {a < ξ < b} are random for any numbers a, b. The frequencies of occurrence of such events in long series of experiments should cluster around certain constant values.

So, for the stochastic approach to be applicable, the following requirements must be met:

1) the mass character of the experiments performed, i.e. a sufficiently large number of them;

2) repeatability of experimental conditions, justifying comparison of the results of different experiments;

3) statistical stability.

The stochastic approach obviously cannot be applied to single experiments: expressions like “the probability that it will rain tomorrow” or “Zenit will win the cup with probability 0.8” are meaningless. But even if experiments are numerous and repeatable, statistical stability may be absent, and checking for it is not an easy task. The known estimates of the permissible deviation of frequency from probability are based on the central limit theorem or on Chebyshev's inequality and require additional hypotheses about the independence or weak dependence of the measurements. Experimental verification of the independence condition is still more difficult, since it requires additional experiments.

The methodology and practical recipes for applying probability theory are presented in more detail in the instructive book by V. N. Tutubalin, an idea of which is given by the quotations below:

“It is extremely important to eradicate the misconception that sometimes occurs among engineers and natural scientists who are not sufficiently familiar with the theory of probability, that the result of any experiment can be considered as a random variable. In especially severe cases, this is accompanied by belief in the normal distribution law, and if the random variables themselves are not normal, then they believe that their logarithms are normal.”

“According to modern concepts, the scope of application of probability-theoretic methods is limited to phenomena that are characterized by statistical stability. However, testing statistical stability is difficult and always incomplete, and it often gives a negative conclusion. As a result, in entire fields of knowledge, for example, in geology, an approach has become the norm in which statistical stability is not checked at all, which inevitably leads to serious errors. In addition, the propaganda of cybernetics undertaken by our leading scientists has given (in some cases!) a somewhat unexpected result: it is now believed that only a machine (and not a person) is capable of obtaining objective scientific results.

In such circumstances, it is the duty of every teacher to again and again propagate that old truth that Peter I tried (unsuccessfully) to instill in Russian merchants: that one must trade honestly, without deception, since in the end it is more profitable for oneself.”

How to build a model of a system if there is uncertainty in the problem, but the stochastic approach is not applicable? Below we briefly outline one of the alternative approaches, based on fuzzy set theory.


Let us recall that a relation R between sets X and Y (a connection between x and y) is a subset of the Cartesian product X × Y, i.e. some set of pairs R = {(x, y)}, where x ∈ X, y ∈ Y. For example, a functional connection (dependence) y = f(x) can be represented as a relation between the sets X and Y consisting of the pairs (x, y) for which y = f(x).

In the simplest case we may have X = Y, and R is the identity relation if it consists of the pairs (x, x), i.e. y = x.

Examples 12-15 in Table 1.1 were invented in 1988 by M. Koroteev, an eighth-grade student of school No. 292.

A mathematician will of course notice here that the minimum in (1.4), strictly speaking, may not be attained, and that in the formulation of (1.4) min must be replaced by inf (“infimum”, the greatest lower bound of the set). However, this does not change the situation: the formalization in this case does not reflect the essence of the problem, i.e. it is carried out incorrectly. In what follows, so as not to “frighten” the engineer, we will use the notation min, max, keeping in mind that, where necessary, they should be replaced by the more general inf, sup.

Here the term “structure” is used in a somewhat narrower sense than in subsection 1.1 and means the composition of the subsystems in the system and the types of connections between them.

A graph is a pair (G, R), where G = {g₁, …, g_n} is a finite set of vertices and R ⊂ G × G is a binary relation on G. If (gᵢ, gⱼ) ∈ R if and only if (gⱼ, gᵢ) ∈ R, the graph is called undirected; otherwise it is called directed. The pairs (gᵢ, gⱼ) are called arcs (edges), and the elements of the set G the vertices of the graph.

That is, algebraic or transcendental.

Strictly speaking, a countable set is an idealization that cannot be realized practically because of the finite size of technical systems and the limits of human perception. Such idealized models (for example, the set of natural numbers N = {1, 2, …}) make sense to introduce for sets that are finite but whose number of elements is unbounded (or unknown) in advance.

Formally, the concept of an operation is a special case of the concept of a relation between elements of sets. For example, the operation of adding two numbers specifies a ternary relation R: a triple of numbers (x, y, z) belongs to the relation R (we write (x, y, z) ∈ R) if z = x + y.

That is, a complex number, the argument of the polynomials A(·), B(·).

This assumption is often met in practice.

If the variance σ² is unknown, it should be replaced in (2.33) by the estimate s² = (1/(N − 1)) Σ (xᵢ − x̄)², where x̄ is the arithmetic mean of the measurements. In this case the quantity ξ will no longer be normally distributed, but distributed according to Student's law, which for N of the order of several tens is practically indistinguishable from the normal law.

It is easy to see that (2.34) is a special case of (2.32) if we take xⱼ = 1 when the event A occurred in the j-th experiment and xⱼ = 0 otherwise. In this case the arithmetic mean of the xⱼ is exactly the frequency ν_A, and the mathematical expectation m = p_A.

And today you can add “... and computer science” (author’s note).

Random fluctuations, caused by the physical variability of various factors over time, are characteristic of any real process. In addition, there may be random external influences on the system. Therefore, even with equal average values of the input parameters, the output parameters will differ at different moments of time. Hence, if random impacts on the system under study are significant, it is necessary to develop a probabilistic (stochastic) model of the object, taking into account the statistical laws of the distribution of the system parameters and choosing the appropriate mathematical apparatus.

When building deterministic models, random factors are neglected and only the specific conditions of the problem being solved and the properties and internal connections of the object are taken into account (almost all branches of classical physics are built on this principle).

The idea of deterministic methods is to use the model's own dynamics during the evolution of the system.

In our course these methods are represented by the molecular dynamics method, whose advantages are the accuracy and determinacy of the numerical algorithm. Its disadvantage is that it is labor-intensive because of the computation of the interaction forces between particles: for a system of N particles, at each step one must perform on the order of N² operations (N(N − 1)/2 pair interactions) to evaluate these forces.
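The quadratic cost is easy to see in code. The following is a schematic sketch, not the method itself; the Lennard-Jones potential is an assumed example, and the double loop visits every pair once, i.e. N(N − 1)/2 force evaluations per step.

    import numpy as np

    def pairwise_forces(pos, eps=1.0, sig=1.0):
        """Direct summation of pair forces for N particles: O(N**2) work."""
        n = len(pos)
        f = np.zeros_like(pos)
        for i in range(n):
            for j in range(i + 1, n):              # N(N-1)/2 pairs
                r = pos[i] - pos[j]
                d2 = np.dot(r, r)
                s6 = (sig**2 / d2) ** 3            # (sig/r)^6
                fac = 24 * eps * (2 * s6**2 - s6) / d2   # LJ force / distance
                f[i] += fac * r
                f[j] -= fac * r
        return f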

Under the deterministic approach, the equations of motion are specified and integrated over time. We will consider systems of many particles. The positions of the particles contribute the potential energy to the total energy of the system, and their velocities determine the kinetic-energy contribution. The system moves along a trajectory of constant energy in phase space (further explanations will follow). For deterministic methods the natural ensemble is the microcanonical one, whose energy is an integral of motion. In addition, one can study systems for which the integral of motion is the temperature and/or the pressure. In this case the system is not closed, and it can be represented as being in contact with a thermal reservoir (the canonical ensemble). To model it, one can use an approach in which a number of degrees of freedom of the system are constrained (for example, by imposing the condition T = const).

As we have already noted, when processes in a system occur unpredictably, such events and the quantities associated with them are called random, and the algorithms for modeling processes in the system are called probabilistic (stochastic). The Greek stochastikos literally means “able to guess”.

Stochastic methods use a somewhat different approach than deterministic ones: only the configurational part of the problem needs to be calculated. The equations for the momenta of a system can always be integrated. The problem that then arises is how to carry out the transitions from one configuration to another, which in the deterministic approach are determined by the momenta. Such transitions in stochastic methods are carried out by probabilistic evolution in a Markov process. The Markov process is a probabilistic analogue of the model's own dynamics.

This approach has the advantage that it allows one to model systems that do not have any inherent dynamics.

Unlike deterministic methods, stochastic methods are simpler and faster to implement on a PC, but obtaining values close to the true ones requires good statistics, and hence modeling a large ensemble of particles.

An example of a completely stochastic method is the Monte Carlo method. Stochastic methods use the important concept of a Markov process (Markov chain). A Markov process is a probabilistic analogue of a process in classical mechanics. A Markov chain is characterized by the absence of memory, i.e. the statistical characteristics of the near future are determined by the present alone, without taking the past into account.
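The absence of memory is easy to demonstrate with a toy chain. In the sketch below (the 3×3 transition matrix is invented for illustration), the next state is drawn using only the current state, and the visit frequencies converge to a stationary distribution:

    import numpy as np

    rng = np.random.default_rng(2)
    P = np.array([[0.5, 0.4, 0.1],     # transition probabilities: row = current state
                  [0.2, 0.6, 0.2],
                  [0.1, 0.4, 0.5]])

    state, counts = 0, np.zeros(3)
    for _ in range(100_000):
        state = rng.choice(3, p=P[state])   # depends only on the present state
        counts[state] += 1
    print(counts / counts.sum())            # empirical stationary distribution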


Random walk model

Example (formal)

Let us assume that particles are placed in arbitrary positions at the nodes of a two-dimensional lattice. At each time step a particle “jumps” to one of the vacant neighboring positions: it can choose the direction of its jump to any of the four nearest sites. After a jump the particle “does not remember” where it jumped from. This case corresponds to a random walk and is a Markov chain. The result of each step is a new state of the particle system. The transition from one state to another depends only on the previous state, i.e. the probability of the system being in state i depends only on state i − 1.
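A direct sketch of this formal model (lattice occupancy is ignored here for simplicity, so all four neighboring sites are always available; the particle and step counts are arbitrary):

    import numpy as np

    rng = np.random.default_rng(3)
    moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])   # four nearest sites

    n_particles, n_steps = 1_000, 500
    pos = np.zeros((n_particles, 2), dtype=int)
    for _ in range(n_steps):
        pos += moves[rng.integers(0, 4, size=n_particles)]   # memoryless jumps
    print("mean square displacement:", (pos**2).sum(axis=1).mean())  # ~ n_steps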

What physical processes in a solid does the formal random-walk model just described remind us of (resemble)?

Diffusion, of course: the very processes whose mechanisms we considered in the heat and mass transfer course (3rd year). As an example, recall ordinary classical self-diffusion in a crystal, when, without changing their visible properties, atoms periodically exchange places of temporary residence and wander through the lattice via the so-called vacancy mechanism. This is also one of the most important mechanisms of diffusion in alloys. The phenomenon of migration of atoms in solids plays a decisive role in many traditional and non-traditional technologies: metallurgy, metalworking, the creation of semiconductors and superconductors, protective coatings and thin films.

It was discovered by Roberts-Austen in 1896, who observed the diffusion of gold in lead. Diffusion is the process of redistribution of atomic concentrations in space by chaotic (thermal) migration. From the point of view of thermodynamics, there can be two causes: entropic (always) and energetic (sometimes). The entropic cause is the increase of chaos when atoms of different kinds are mixed. The energetic cause promotes the formation of an alloy, when it is more advantageous for atoms of different kinds to be close together, or promotes diffusion decomposition, when the energy gain is provided by gathering atoms of the same kind together.

The most common diffusion mechanisms are:

    vacancy

    interstitial

    displacement mechanism

For the vacancy mechanism to operate, at least one vacancy is required. A vacancy migrates when one of the neighboring atoms moves into the unoccupied site; an atom can make a diffusion jump only if a vacancy is next to it. With a jump length of the order of the interatomic distance and the period of thermal vibrations of an atom at a lattice site, at a temperature T = 1330 K (6 K below the melting point) a vacancy makes so many jumps per second that its path per second along the broken line is about 3 m (≈ 10 km/h), while its straight-line displacement is roughly 300 times shorter than the path along the broken line.

Nature has arranged it so that within 1 s a vacancy changes its place of residence many times, traveling 3 m along a broken line while moving only about 10 microns in a straight line. Atoms behave more calmly than vacancies, but they too change their place of residence a million times per second and move at a speed of about 1 m/hour.

So, one vacancy per several thousand atoms is enough to move atoms at the micro level at temperatures close to melting.

Let us now construct a random-walk model for the phenomenon of diffusion in a crystal. The wandering of an individual atom is chaotic and unpredictable; however, for an ensemble of wandering atoms statistical regularities should appear. We will consider uncorrelated jumps.

This means that if Δrᵢ and Δrⱼ are the displacements of an atom at the i-th and j-th jumps, then after averaging over the ensemble of wandering atoms

⟨Δrᵢ · Δrⱼ⟩ = ⟨Δrᵢ⟩ · ⟨Δrⱼ⟩ for i ≠ j

(the mean of a product equals the product of the means; if the walk is completely random, all directions are equivalent and ⟨Δrᵢ⟩ = 0, so ⟨Δrᵢ · Δrⱼ⟩ = 0).

Let each particle of the ensemble make N elementary jumps. Then its total displacement is

R = Σ_{i=1}^{N} Δrᵢ,

and the mean square of the displacement is

⟨R²⟩ = Σᵢ ⟨Δrᵢ²⟩ + 2 Σ_{i<j} ⟨Δrᵢ · Δrⱼ⟩.

Since there is no correlation, the second term is 0.

Let each jump have the same length h and a random direction, and let the average number of jumps per unit time be ν. Then

⟨R²⟩ = N h².

It is obvious that N = ν t. Let us call the quantity

D = ν h² / 2

the diffusion coefficient of the wandering atoms (for walks along one coordinate). Then

⟨R²⟩ = 2 D t;

for the three-dimensional case,

⟨R²⟩ = 6 D t.

We have obtained the parabolic law of diffusion: the mean square of the displacement is proportional to the wandering time.

This is exactly the problem we will solve in the next laboratory work: modeling one-dimensional random walks.

Numerical model.

We define an ensemble of M particles, each of which, independently of the others, takes N steps to the right or to the left with equal probability. The step length is h.

For each particle we calculate the square of its displacement after N steps, R_k². Then we perform averaging over the ensemble:

⟨R²⟩ = (1/M) Σ_{k=1}^{M} R_k².

The quantity ⟨R²⟩ = N h² = 2 D t if t = N τ (where τ is the average time of one step), i.e. the mean square of the displacement is proportional to the random-walk time: the parabolic law of diffusion.
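A minimal sketch of this laboratory model (the values of M, N and h are arbitrary here):

    import numpy as np

    rng = np.random.default_rng(4)
    M, N, h = 10_000, 1_000, 1.0

    steps = rng.choice((-h, h), size=(M, N))   # +-h with equal probability
    R = steps.sum(axis=1)                      # total displacement of each particle
    msd = (R**2).mean()                        # ensemble average <R^2>
    print(msd, "theory:", N * h**2)            # <R^2> = N h^2 = 2 D t

The printed value fluctuates around N·h², confirming the parabolic law.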


Solution development method. Some decisions, usually typical and recurring ones, can be successfully formalized, i.e. made according to a predetermined algorithm. In other words, a formalized decision is the result of performing a predetermined sequence of actions. For example, when drawing up a schedule for the repair maintenance of equipment, the shop manager may proceed from a standard requiring a certain ratio between the amount of equipment and the number of maintenance personnel: if there are 50 units of equipment in a workshop and the maintenance standard is 10 units per repair worker, then the workshop must have five repair workers. Similarly, when a financial manager decides to invest surplus funds in government securities, he chooses between different types of bonds depending on which of them provides the highest return on the invested capital at the given time. The choice is made on the basis of a simple calculation of the final yield of each option and a determination of the most profitable one.

Formalization of decision-making increases management efficiency by reducing the likelihood of error and saving time: there is no need to re-develop a solution every time a corresponding situation arises. Therefore, the management of organizations often formalizes solutions for certain, regularly recurring situations, developing appropriate rules, instructions and standards.

At the same time, in the process of managing organizations, new, atypical situations and non-standard problems are often encountered that cannot be resolved formally. In such cases, intellectual abilities, talent and personal initiative of managers play a big role.

Of course, in practice, most decisions occupy an intermediate position between these two extreme points, allowing both the manifestation of personal initiative and the use of a formal procedure in the process of their development. The specific methods used in the decision-making process are discussed below.

· Number of selection criteria.

If the choice of the best alternative is made according to only one criterion (which is typical of formalized decisions), the decision will be simple and single-criterion. Conversely, when the chosen alternative must simultaneously satisfy several criteria, the decision will be complex and multi-criteria. In management practice the vast majority of decisions are multi-criteria, since they must simultaneously meet such criteria as profit volume, profitability, quality level, market share, employment level, implementation period, etc.

· Decision form.

The person making the choice from the available alternatives for the final decision can be one person and his decision will accordingly be sole. However, in modern management practice, complex situations and problems are increasingly encountered, the solution of which requires a comprehensive, integrated analysis, i.e. participation of a group of managers and specialists. Such group, or collective, decisions are called collegial. Increased professionalization and deepening specialization of management lead to the widespread spread of collegial forms of decision-making. It is also necessary to keep in mind that certain decisions are legally classified as collegial. For example, certain decisions in a joint stock company (on the payment of dividends, distribution of profits and losses, major transactions, election of governing bodies, reorganization, etc.) fall under the exclusive competence of the general meeting of shareholders. The collegial form of decision-making, of course, reduces the efficiency of management and “erodes” responsibility for its results, but it prevents gross errors and abuses and increases the validity of the choice.

· Method of fixing the solution.

On this basis, management decisions can be divided into fixed, or documentary (i.e., drawn up in the form of some kind of document - an order, instruction, letter, etc.), and undocumented (not having a documentary form, oral). Most decisions in the management apparatus are documented, but small, insignificant decisions, as well as decisions made in emergency, acute, and urgent situations, may not be documented.

· Nature of information used. Depending on the degree of completeness and reliability of the information available to the manager, management decisions can be deterministic (made under conditions of certainty) or probabilistic (adopted under conditions of risk or uncertainty). These conditions play an extremely important role in decision making, so let's look at them in more detail.

Deterministic and probabilistic decisions.

Deterministic solutions are accepted under conditions of certainty, when the manager has almost complete and reliable information regarding the problem being solved, which allows him to know exactly the result of each of the alternative choices. There is only one such result, and the probability of its occurrence is close to one. An example of a deterministic decision would be the choice of 20% federal loan bonds with a constant coupon income as an investment tool for free cash. In this case, the financial manager knows for sure that, with the exception of extremely unlikely emergency circumstances due to which the Russian government will not be able to fulfill its obligations, the organization will receive exactly 20% per annum on the invested funds. Similarly, when deciding to launch a particular product into production, a manager can accurately determine the level of production costs, since rental rates, materials and labor costs can be calculated quite accurately.

Analysis of management decisions under conditions of certainty is the simplest case: the number of possible situations (options) and their outcomes are known. You need to choose one of the possible options. The degree of complexity of the selection procedure in this case is determined only by the number of alternative options. Let's consider two possible situations:

a) There are two possible options;

In this case, the analyst must choose (or recommend choosing) one of two possible options. The sequence of actions here is as follows:

· the criterion by which the choice will be made is determined;

· the criterion values for the compared options are calculated by “direct counting”;

Various methods for solving this problem are possible. Typically they are divided into two groups:

methods based on discounted valuations;

methods based on accounting estimates.

Probabilistic-deterministic mathematical models for forecasting energy load curves are a combination of statistical and deterministic models. It is these models that provide the best forecasting accuracy and adaptability to the changing power-consumption process.

They are based on the concept of standardized load modeling, i.e. an additive decomposition of the actual load y(t, d) into a standardized graph (base component, deterministic trend) y₀(t, d) and a residual component v(t, d):

y(t, d) = y₀(t, d) + v(t, d),

where t is the time within the day and d is the number of the day, for example, within a year.

Within the standardized component, individual additive terms are also singled out during modeling, taking into account: changes in the average seasonal load s(t, d); the weekly cycle of power-consumption changes w(t, d); a trend component that models additional effects associated with the seasonal changes in sunrise and sunset times; and a component that takes into account the dependence of power consumption on meteorological factors, in particular temperature, etc.

Let us consider in more detail approaches to modeling individual components based on the deterministic and statistical models mentioned above.

Modeling of the average seasonal load s(t) is often done using simple moving averaging:

s(t) = (1/N) Σ_{j=1}^{N} y(t, dⱼ),

where N is the number of ordinary, regular (working) days contained in the past n weeks; “special” and “irregular” days, holidays, etc. are excluded from the weeks. Daily updating is carried out by averaging the data over the past n weeks.

Modeling of the weekly cycle w(t) is also carried out by moving averaging of the same form, updated weekly by averaging the data over the past n weeks, or by using an exponentially weighted moving average:

w_new(t) = α·y(t) + (1 − α)·w_old(t),

where α is an empirically determined smoothing parameter (0 < α < 1).
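Both schemes are a few lines of code. A sketch under the notation assumed above (the window n and the parameter alpha are illustrative choices):

    import numpy as np

    def moving_average(y_past_weeks, n):
        """Average the loads observed at the same time of day over the past n weeks."""
        return np.asarray(y_past_weeks)[-n:].mean()

    def ewma_update(w_old, y_new, alpha=0.2):
        """Exponentially weighted update; 0 < alpha < 1 is chosen empirically."""
        return alpha * y_new + (1 - alpha) * w_old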

In one work, seven components (one for each day of the week) are used to model s and w, each determined separately using an exponential smoothing model.

The authors of another work use double exponential smoothing of the Holt-Winters type for this modeling. Yet another work uses a harmonic representation of the form

w(d) = a₀ + Σ_k [a_k cos(2πkd/52) + b_k sin(2πkd/52)],

with parameters estimated from empirical data (the value 52 is the number of weeks in a year). However, the problem of adaptive operational estimation of these parameters is not completely solved in that work.

Modeling of the standardized component is in some cases carried out using finite Fourier series: with a weekly period, with a daily period, or with separate modeling of working days and weekends (with periods of five and two days, respectively):

y₀(t) = a₀ + Σ_{k=1}^{K} [a_k cos(2πkt/T) + b_k sin(2πkt/T)],

where T is the corresponding period.
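A sketch of such a harmonic model (the number of harmonics K and the period T, e.g. 168 hours for a week or 24 hours for a day, are assumptions):

    import numpy as np

    def fourier_load(t, a0, a, b, T=24.0):
        """y0(t) = a0 + sum_k [a_k cos(2 pi k t / T) + b_k sin(2 pi k t / T)]."""
        k = np.arange(1, len(a) + 1)
        return a0 + np.sum(a * np.cos(2 * np.pi * k * t / T)
                           + b * np.sin(2 * np.pi * k * t / T))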

To model the trend component, either polynomials of the 2nd to 4th order or various nonlinear empirical functions are used, for example, of the form

tr(t, d) = P₄(t, d) + f₁(t, d) + f₂(t, d),

where P₄ is a fourth-degree polynomial describing the relatively slow, smoothed load changes during the daytime over the seasons, and f₁, f₂ are functions modeling the effects associated with the seasonal changes in sunrise and sunset times.

To take into account the dependence of power consumption on meteorological factors, an additional component m(t) is introduced in some cases. One work theoretically substantiates its inclusion in the model, but the possibilities of modeling the temperature effect are considered only to a limited extent. Thus, to represent the temperature component for Egyptian conditions, a polynomial model is used:

m(t) = Σ_{i=0}^{p} cᵢ T(t)ⁱ,

where T(t) is the air temperature at the t-th hour.

A regression method is used to “normalize” the peaks and troughs of the process with allowance for temperature, the normalized data being represented by a one-dimensional autoregressive integrated moving average (ARIMA) model.

A recursive Kalman filter that includes an external factor (the temperature forecast) is also used for temperature-aware load modeling. Alternatively, polynomial cubic interpolation of hourly loads is used in the short-term range, with the influence of temperature taken into account in the model.

To take into account average daily temperature forecasts and various weather conditions for the realization of the analyzed process, and at the same time to increase the stability of the model, it is proposed to use a special modification of the moving-average model

ŷ(t) = Σ_{j=1}^{m} pⱼ yⱼ(t),

where a series of m load graphs yⱼ(t) is formed for the various weather conditions, associated with probabilities pⱼ, and the forecast is defined as the conditional mathematical expectation. The probabilities pⱼ are updated by the Bayes method as new actual values of the load and the factors become available during the day.

Modeling of the residual component v(t) is carried out both with one-dimensional models and with multidimensional ones that take meteorological and other external factors into account. Thus, an autoregressive model AR(k) of order k is often used as a one-dimensional (one-factor) model:

v(t) = Σ_{i=1}^{k} aᵢ v(t − i) + e(t),

where e(t) is residual white noise. To predict hourly (half-hourly) readings, the AR(1), AR(2) and even AR(24) models are used. Even when the generalized ARIMA model is adopted for v(t), in practice its application comes down to the AR(1) and AR(2) models for both five-minute and hourly load measurements.
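Fitting an AR(k) model reduces to ordinary least squares on lagged values. A sketch (k = 2 by default, matching the AR(2) case mentioned above):

    import numpy as np

    def fit_ar(v, k=2):
        """Estimate a_1..a_k in v(t) = sum_i a_i v(t-i) + e(t) by least squares."""
        X = np.column_stack([v[k - i: len(v) - i] for i in range(1, k + 1)])
        a, *_ = np.linalg.lstsq(X, v[k:], rcond=None)
        return a

    def ar_forecast(v, a):
        """One-step-ahead forecast from the last k observed residuals."""
        return float(np.dot(a, v[-1: -len(a) - 1: -1]))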

Another one-factor model for the residual component is single or double exponential smoothing. This model makes it possible to effectively identify short-term trends in the variation of the residual load. Simplicity, economy, recursiveness and computational efficiency make the exponential smoothing method widely used. Using simple exponential smoothing with different constants, two exponential averages S₁(t) and S₂(t) are determined, and the forecast of the residual component τ steps ahead is determined by a formula of the Brown type:

v̂(t + τ) = 2S₁(t) − S₂(t) + τ·(α/(1 − α))·(S₁(t) − S₂(t)).
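A sketch of this scheme in the Brown form given above (a single smoothing constant alpha is used here for simplicity; the two-constant variant differs only in maintaining S1 and S2 with different constants):

    def double_smoothing_forecast(v, alpha=0.3, tau=1):
        """Brown-type double exponential smoothing forecast tau steps ahead."""
        s1 = s2 = v[0]
        for x in v[1:]:
            s1 = alpha * x + (1 - alpha) * s1    # first exponential average
            s2 = alpha * s1 + (1 - alpha) * s2   # second exponential average
        a = 2 * s1 - s2                          # level estimate
        b = alpha / (1 - alpha) * (s1 - s2)      # trend estimate
        return a + b * tau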
