Calculating the remaining time of a process. How to learn to determine the timing of work correctly - experts answer.

The preemptive version of the previous algorithm is the Shortest Remaining Time algorithm. Under this algorithm, the scheduler each time selects the process with the smallest remaining execution time. Here, too, the execution times of the tasks must be known in advance. When a new task arrives, its total execution time is compared with the remaining execution time of the current task. If the new task needs less time, the current process is preempted and control is transferred to the new task. This scheme allows short requests to be served quickly.
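As an illustration, here is a minimal Python sketch of such a preemptive shortest-remaining-time scheduler; the job list, the `srt_schedule` helper and its output format are invented for the example and assume run times are known in advance.

```python
import heapq

def srt_schedule(jobs):
    """Simulate Shortest Remaining Time scheduling.

    jobs: list of (arrival_time, burst_time) pairs.
    Returns a list of (job_index, start, end) execution slices.
    """
    order = sorted(range(len(jobs)), key=lambda i: jobs[i][0])
    remaining = [burst for _, burst in jobs]
    ready = []                 # heap of (remaining_time, job_index)
    slices = []
    t, j = 0, 0
    while j < len(order) or ready:
        if not ready:                          # idle until the next arrival
            t = max(t, jobs[order[j]][0])
        while j < len(order) and jobs[order[j]][0] <= t:
            idx = order[j]
            heapq.heappush(ready, (remaining[idx], idx))
            j += 1
        rem, idx = heapq.heappop(ready)
        # run until this job finishes or the next arrival may preempt it
        next_arrival = jobs[order[j]][0] if j < len(order) else float("inf")
        run = min(rem, next_arrival - t)
        slices.append((idx, t, t + run))
        t += run
        remaining[idx] = rem - run
        if remaining[idx] > 0:
            heapq.heappush(ready, (remaining[idx], idx))
    return slices

print(srt_schedule([(0, 8), (1, 4), (2, 9), (3, 5)]))
```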

Three-level scheduling

Batch processing systems allow three-level scheduling, as shown in the figure. As new jobs enter the system, they are first placed in a queue stored on disk. The admission scheduler selects a job and passes it to the system; the remaining jobs stay in the queue.

Once a job has been admitted to the system, a corresponding process is created for it, and it can immediately begin competing for the processor. Nevertheless, there may be too many processes to fit in memory, in which case some of them are paged out to disk. The second level of scheduling decides which processes are kept in memory and which on disk. This is done by the memory scheduler.

The memory scheduler periodically looks at the processes that are on disk to decide which one to move into memory. Among the criteria used by the scheduler are the following:

1. How long has it been since the process was paged to disk or loaded from disk?

2. How long has the process been using the CPU?

3. What is the size of the process (small processes don't get in the way)?

4. What is the importance of the process?

The third level of scheduling is responsible for giving processes in the ready state access to the processor. When we talk about "the scheduler", we usually mean the CPU scheduler. This scheduler uses whatever algorithm suits the situation, preemptive or non-preemptive. We have already considered some of these algorithms and will get acquainted with others.

Scheduling in interactive systems.

Round-robin scheduling.

One of the oldest, simplest, fairest and most widely used algorithms is round-robin scheduling. Each process is given a certain interval of processor time, the so-called time quantum. If the process is still running when its quantum expires, it is preempted and the processor is given to another process. Of course, if the process blocks or finishes earlier, the switch happens at that point. Round-robin scheduling is simple to implement: the scheduler only needs to keep a list of ready processes, and when a process has used up its quantum it is moved to the end of the list.

The only interesting issue in this algorithm is the length of the quantum. Switching from one process to another takes time: registers and memory maps must be saved and loaded, tables and lists updated, the memory cache flushed and reloaded, and so on. The conclusion can be formulated as follows: too small a quantum leads to frequent process switches and low efficiency, while too large a quantum leads to slow responses to short interactive requests. A quantum of about 20-50 ms is often a reasonable compromise.
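A round-robin scheduler is easy to simulate. The sketch below, with an assumed 20 ms quantum and invented process bursts, shows how a preempted process goes to the back of the ready queue.

```python
from collections import deque

def round_robin(bursts, quantum=20):
    """Simulate round-robin scheduling; bursts are CPU demands in ms."""
    ready = deque(enumerate(bursts))       # (process_id, remaining_time)
    finish = {}
    t = 0
    while ready:
        pid, rem = ready.popleft()
        run = min(rem, quantum)
        t += run
        if rem - run > 0:
            ready.append((pid, rem - run))  # preempted: back of the queue
        else:
            finish[pid] = t                 # completed at time t
    return finish

print(round_robin([30, 50, 20], quantum=20))   # {2: 60, 0: 70, 1: 100}
```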

Priority scheduling.

The round-robin scheduling algorithm makes an important assumption that all processes are equal. On a computer with a large number of users this may not be the case. For example, at a university the deans should be served first, then professors, secretaries, cleaners, and only then students. The need to take such external factors into account leads to priority scheduling. The basic idea is simple: each process is assigned a priority, and control is given to the ready process with the highest priority.

Multiple queues.

One of the first priority schedulers was implemented in the Compatible Time-Sharing System (CTSS). The main problem with CTSS was that process switching was too slow, since only one process could reside in the memory of the IBM 7094 computer: each switch meant swapping the current process out to disk and reading a new process in from disk.

The CTSS developers quickly realized that efficiency would be higher if CPU-bound processes were occasionally given a large quantum rather than small quanta frequently. On the one hand, this reduces the number of swaps between memory and disk; on the other hand, as we have already seen, it worsens response time.

As a result, a solution with priority classes was developed. Processes of the class with the highest priority were given one quantum, processes of the next class two quanta, the next four quanta, and so on. Whenever a process used up all the time allotted to it, it moved down one class.

As an example, consider a process that needs to compute for 100 quanta. At first it is given one quantum, after which it is swapped out to disk. The next time it gets 2 quanta, then 4, 8, 16, 32 and 64, although of the last 64 it uses only 37. In this case only 7 swaps (including the initial load) are needed instead of the 100 that a pure round-robin algorithm would require. In addition, as the process sinks deeper into the priority queues, it runs less and less often, leaving the processor to shorter processes.
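The swap count from this example can be checked with a small sketch; the `feedback_swaps` helper below is hypothetical and simply doubles the quantum on every run, as in the CTSS scheme described above.

```python
def feedback_swaps(total_quanta):
    """Count how many loads (swaps in) a CPU-bound job needs when each
    priority class doubles the quantum: 1, 2, 4, 8, ..."""
    quantum, used, swaps = 1, 0, 0
    while used < total_quanta:
        swaps += 1                               # job is brought in and run
        used += min(quantum, total_quanta - used)
        quantum *= 2                             # next class: double quantum
    return swaps

print(feedback_swaps(100))   # 7 runs: 1+2+4+8+16+32+37(of 64) = 100
```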

“Shortest process next”

Since the shortest-job-first algorithm minimizes the average turnaround time in batch systems, one would like to use it in interactive systems as well. To a certain extent this is possible. Interactive processes most often follow the pattern "wait for a command, execute the command, wait for a command, execute the command...". If we treat the execution of each command as a separate task, we can minimize the overall average response time by running the shortest task first. The only problem is figuring out which of the waiting processes is the shortest.

One method relies on estimating the length of a process from its previous behavior and running the process with the shortest estimated time. Suppose the estimated execution time of a command is T0 and the measured time of its next run is T1. The estimate can be improved by taking a weighted sum of these times, aT0 + (1 - a)T1. By choosing a suitable value of a, we can make the estimation algorithm forget previous runs quickly or, conversely, remember them for a long time. Taking a = 1/2, we get the following series of estimates:

T0, T0/2 + T1/2, T0/4 + T1/4 + T2/2, T0/8 + T1/8 + T2/4 + T3/2.

After three runs, the weight of T0 in the estimate drops to 1/8.

The technique of estimating the next value in a series as a weighted average of the latest measurement and the previous estimate is often called aging. It is applicable in many situations where an estimate must be made from previous values. Aging is easiest to implement when a = 1/2: at each step you only need to add the new value to the current estimate and divide the sum by two (shifting right by 1 bit).
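A possible sketch of such an aging estimator, with a = 1/2 and invented burst measurements:

```python
def aging_estimate(estimate, measured, a=0.5):
    """Blend the previous estimate with the latest measured CPU burst."""
    return a * measured + (1 - a) * estimate

# With a = 1/2 old runs fade quickly: after three updates the weight of the
# very first burst has dropped to (1/2)**3 = 1/8.
est = 10.0                        # initial guess T0
for burst in (12.0, 8.0, 9.0):    # measured bursts of the following runs
    est = aging_estimate(est, burst)
    print(round(est, 2))          # 11.0, 9.5, 9.25
```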

Guaranteed scheduling.

A fundamentally different approach to scheduling is to make real promises to the users and then fulfill them. Here is one promise that is easy to state and easy to keep: if n users share the CPU with you, each will receive 1/n of the CPU power.

Likewise, in a system with one user and n running processes, each gets 1/n of the processor cycles.

To fulfill this promise, the system must track how much CPU each process has received since its creation. It then calculates the amount of CPU the process is entitled to, namely the time since creation divided by n. From this it can compute the ratio of the CPU time actually granted to the time the process is entitled to: a value of 0.5 means the process got only half of what it should have, and 2.0 means it got twice as much. The process with the lowest ratio is then run until its ratio exceeds that of its nearest competitor.
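The ratio-based choice might be sketched as follows; the process records and the `pick_guaranteed` helper are illustrative assumptions, not part of any particular system.

```python
def pick_guaranteed(processes, now):
    """Pick the process whose (CPU time received / CPU time entitled) is lowest.

    processes: list of dicts with 'created' and 'used' (CPU seconds consumed).
    """
    n = len(processes)

    def ratio(p):
        entitled = (now - p["created"]) / n      # fair share since creation
        return p["used"] / entitled if entitled > 0 else 0.0

    return min(processes, key=ratio)

procs = [
    {"name": "A", "created": 0, "used": 30},
    {"name": "B", "created": 0, "used": 10},
    {"name": "C", "created": 20, "used": 5},
]
print(pick_guaranteed(procs, now=60)["name"])   # "C" has the lowest ratio
```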

Lottery scheduling.

The algorithm is based on handing out lottery tickets to processes for access to various resources, including the processor. When the scheduler needs to make a decision, a lottery ticket is chosen at random, and its owner gets access to the resource. For CPU access, the "lottery" might be held 50 times per second, with the winner receiving 20 ms of CPU time.

More important processes can be given extra tickets to increase their chances of winning. If there are 100 tickets in total and one process holds 20 of them, it will get 20% of the processor time. Unlike priority scheduling, where it is very hard to say what a priority of, say, 40 actually means, in lottery scheduling everything is clear: each process receives a share of the resource roughly equal to its share of the tickets.

Lottery planning has several interesting properties. For example, if a process receives several tickets during creation, then in the next lottery its chances of winning are proportional to the number of tickets.

Cooperating processes can exchange tickets as needed. So, if a client process sends a message to a server process and then blocks, it can pass all of its tickets to the server process to increase the chance of starting the server. When the server process terminates, it can return all tickets back.
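A lottery draw takes only a few lines; the ticket counts below are invented, and over many draws the win shares come out roughly proportional to them.

```python
import random

def lottery_pick(tickets):
    """Draw one lottery ticket; holders of more tickets win more often.

    tickets: dict mapping process name -> number of tickets held.
    """
    draw = random.randrange(sum(tickets.values()))
    for name, count in tickets.items():
        if draw < count:
            return name
        draw -= count

holders = {"server": 50, "client": 30, "backup": 20}   # 100 tickets in total
wins = {name: 0 for name in holders}
for _ in range(10_000):                                # 10 000 simulated draws
    wins[lottery_pick(holders)] += 1
print(wins)   # roughly proportional to 50 : 30 : 20
```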

Fair-share scheduling.

So far we have assumed that each process is scheduled on its own, regardless of who its owner is. As a result, if user 1 creates 9 processes and user 2 creates 1 process, then with round-robin scheduling or equal priorities user 1 will get 90% of the processor and user 2 only 10%.

To avoid such situations, some systems take the process owner into account when scheduling. In this model each user is allotted some share of the processor, and the scheduler selects processes so as to honor it. If in our example each user had been promised 50% of the processor, they would each get 50%, regardless of the number of processes.
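One possible way to sketch such owner-aware selection is to charge CPU use to users rather than to processes and pick a process of the most under-served user; the data and the `pick_fair_share` helper are assumptions made for the example.

```python
def pick_fair_share(processes, shares):
    """Fair-share selection: charge CPU use to the owner, not the process.

    processes: list of dicts with 'owner' and 'used' (CPU time consumed).
    shares:    dict owner -> promised fraction of the CPU (sums to 1.0).
    """
    used_by_owner = {}
    for p in processes:
        used_by_owner[p["owner"]] = used_by_owner.get(p["owner"], 0) + p["used"]
    total = sum(used_by_owner.values()) or 1

    # pick a process of the owner who is furthest below the promised share
    def deficit(p):
        return used_by_owner[p["owner"]] / total - shares[p["owner"]]

    return min(processes, key=deficit)

procs = [{"owner": "user1", "used": 9}] * 9 + [{"owner": "user2", "used": 1}]
print(pick_fair_share(procs, {"user1": 0.5, "user2": 0.5})["owner"])  # user2
```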

Scheduling in real-time systems.

Time plays an important role in real-time systems. Most often, one or more external physical devices generate input signals, and the computer must respond adequately to them within a given period of time.

Real-time systems are divided into hard real-time systems, in which there are strict deadlines for each task that must be met, and soft real-time systems, in which violations of the schedule are undesirable but tolerable. In both cases the program is divided into several processes, each with predictable behavior. These processes are usually short and complete their work within a second. When an external event occurs, it is the scheduler's job to make sure the deadlines are met.

External events to which the system must respond can be divided into periodic (occurring at regular intervals) and aperiodic (occurring unpredictably). There may be several periodic streams of events that the system must handle. Depending on how long it takes to process each event, the system may not even be able to handle them all in time.
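A commonly used necessary condition here is that the total fraction of CPU time demanded by the periodic events must not exceed 1. The small check below, with invented periods and processing times, illustrates it.

```python
def schedulable(events):
    """Necessary condition for keeping up with periodic real-time events:
    the total fraction of CPU they demand must not exceed 1.

    events: list of (period_ms, processing_time_ms) pairs.
    """
    load = sum(c / p for p, c in events)
    return load, load <= 1.0

# Invented example: events every 100, 200 and 500 ms needing
# 50, 30 and 100 ms of CPU time respectively.
print(schedulable([(100, 50), (200, 30), (500, 100)]))   # (0.85, True)
```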




Everything discussed in the previous few sections was focused more on further research into the problem of a process's proper time and much less on practical applications. To fill this gap, we present one method for calculating the proper time of a process from statistical data on its evolution.

Consider a one-dimensional process whose state is characterized by a real variable x. Assume that observations of the dynamics of the process are made in astronomical time t, so that t = tk and x = xk, k = 1, ..., n, are the fixed observation moments and the corresponding process states. There are many mathematical methods for constructing curves that either pass through the points (tk, xk) or approach them in the "best" way. The resulting functions x = x(t) create the impression that the process under consideration depends on the mechanical motion of celestial bodies and that its state is therefore expressed in terms of astronomical time t. One could accept such a conclusion, were it not for the persistent difficulties that arise when trying to predict the further course of the process. For a large number of processes that are not directly related to the mechanical motion of celestial bodies, the theoretical predictions obtained with the function x = x(t) outside the observation interval begin to deviate significantly from subsequent experimental data. The discrepancy between theory and experiment is usually blamed on a poorly chosen processing method, but that may not be the essence of the matter.

Any process of interest to us takes place in the Universe. It, of course, "feels" the influence of the motion of celestial bodies. However, this influence may turn out to be "soft" and non-determining. In particular, this can show up in the fact that over certain intervals of astronomical time the state of the process remains unchanged. In this connection, recall the earlier example of a closed empty room isolated from the outside world, into which we let a single live fly. For several days, changes in the state of the "room - fly" system will depend on the movement of the fly, since no changes in the state of the room itself can be expected. At the same time, it is hard to imagine that the behavior of the fly is strictly tied to the course of astronomical time.

Having made such a long digression, let us proceed to the description of the algorithm for calculating the proper time of a process.

In this algorithm, the count of local maxima is chosen as the natural measure of time. In addition, possible intervals where the process is stationary are taken into account; on these, as noted earlier, the proper time stops. Since two states can be considered identical only within the measurement accuracy, a positive number e, the admissible measurement error, is used below.

So, the input data for the algorithm are a natural number n, a positive number e, and the arrays (tk) and (xk), k = 1, ..., n. For convenience of programming, the algorithm is presented as four sequentially executed modules.

Module 1 uses the data n, e, (tk), (xk) to form, in the general case, new arrays T = (Ti), X = (Xi) and an accompanying array P = (pi), where i = 1, ..., m and m ≤ n. The main purpose of this module is to find, in the array (xk), sequences of identical process states, to keep the first element of each such sequence and discard the rest, and finally to shorten, according to a certain rule, the original observation interval by the total length of those time intervals during which the process is stationary.

Module 1 includes the following procedures:

1, 2. Counters are introduced with the initial values p := 1, t := 0, k := 1.

3, 4. The counters are increased by 1.

5. Check the condition k1 ≤ n. If it holds, go to step 6; otherwise go to step 11.

6. Check the inequality |x k1 − x k| ≤ e. If it holds, go to step 7; otherwise go to step 9.

7. t i := t i − (t k1 − t k), i = k1, ..., n.

This procedure means that if the values x k and x k1 are indistinguishable within the error, then all times starting from t k1 are reduced by the amount t k1 − t k.

8. Return to step 4.

9. T v := t k; X v := x k; p v := p; v := v + 1, i.e. the elements of the arrays T, X, P are formed and the next value of v is set.

10. Take (t k1, ..., t n) and (x k1, ..., x n) as the original arrays of dimension n − k1 + 1 and return to step 2.

11. Print m, (T i), (X i) and (P i), where i = 1, ..., m. End.

Let us explain the meaning of the elements of the accompanying array P. It follows from the preceding text that p i equals the number of consecutive elements of the array (xk) that belong to the i-th group, i.e. that differ from its first element by less than e. Note also that p1 + ... + pm = n.

Example 1. Given: n = 20, (tk) = (2, 4, 7, 10, 12, 13, 15, 17, 20, 22, 24, 25, 27, 30, 32, 33, 34, 35, 36) and (xk) = (4, 4, 6, 6, 6, 3, 2, 4, 3, 3, 3, 2, 2, 4, 5, 5, 5, 4, 3), see Fig. 9, a.

As a result of executing module 1, we obtain m = 11,

(Ti) = (2, 3, 4, 6, 8, 11, 12, 15, 17, 18, 19); (Xi) = (4, 6, 3, 2, 4, 3, 2, 4, 5, 4, 3)

and (Pi) = (2, 4, 1, 1, 1, 3, 2, 1, 3, 1, 1), see Fig. 9, b.

Module 2. Its input data are the natural number m and the arrays (Ti), (Xi), i = 1, ..., m. In the array (Ti) this module finds the time points (TMl), l = 1, ..., m1, at which the local maxima of the process occur, and forms the array (T*j), j = 1, ..., m2, of the observation times lying between the first and the last local maximum.

Example 2. The values m, (Ti) and (Xi) are borrowed from the previous example. After executing module 2, m1 = 3, m2 = 8, (TMl) = (3, 8, 17), (T*j) = (3, 4, 6, 8, 11, 12, 15, 17); see also Fig. 9, b.

Module 3. Input data: m1, m2, (TMl), l = 1, ..., m1, and (T*j), j = 1, ..., m2.

This module builds the array (τj) according to the formula

τj = (l − 1) + (T*j − TMl) / (TMl+1 − TMl), where T*j ∈ [TMl, TMl+1].

The variable τ is the proper time generated by the change of the variable x. Its natural measure is the count of local maxima.

Example 3. The initial data are the same values m1, m2, (TMl) and (T*j) as in example 2. After the corresponding calculations we obtain (τj) = (0; 0.2; 0.6; 1; 1.33; 1.44; 1.78; 2).

Module 4 forms the output by establishing a correspondence between the values of τ and the elements x from the array (Xi).

Example 4. Based on the data of examples 2 and 3, the following result is produced, see Fig. 9, c:

τ: 0; 0.2; 0.6; 1; 1.33; 1.44; ...

x: 6; 3; 2; 4; 3; 2; ...

Thus, the considered algorithm makes it possible to develop the concept of the proper time of a process from information about changes in its state recorded on the astronomical time scale. It is quite clear that other algorithms could be used, based, for example, on counting a sequence of local minima or a mixed sequence of local maxima and minima. When processing experimental data one should probably try different variants. If, for some reason, the experimenter has chosen one specific proper time and obtained the arrays (τk) and (xk), then at the next stage he should use some mathematical method to approximate the experimental points (τk, xk) by an approximate world line of the process x = x(τ). By extrapolating this line beyond the original observation interval, he can make predictions about the further course of the process.

It is interesting to mention a computational experiment intended to assess the prospects of the proposed algorithm. As experimental material, data on the annual runoff of the Vakhsh River (Tajikistan) for the previous 40 years were chosen. For the same period, information was taken on the dynamics of the Wolf number, the most commonly used integral index of solar activity. The latter was used to develop the proper time of the solar-activity process. The information on the discharge of the Vakhsh was converted to this new time, and a theoretical dependence of the water flow rate as a function of the proper time of solar activity was constructed over the observation interval. A characteristic feature of the resulting graph is the almost periodic behavior of the maximum and minimum discharges. The discharges themselves, however, do not remain constant.

Often developers, especially inexperienced ones, get lost when asked to give deadlines for tasks. However, the ability to plan is a very useful and necessary skill that helps not only at work but also in life. We decided to ask the experts how to learn to plan properly and deliver projects on time.

Brief conclusions can be found at the end of the article.

A developer usually needs to consider several parameters at once in order to estimate the time to complete a task:

  1. Experience in performing such tasks and working with this technology stack. If you have to do something fundamentally new, you need to be especially careful with the assessment.
  2. Experience with this client. Knowing the customer, you can roughly predict some additional requirements and the amount of edits.
  3. The quality of the code to work with. This is the most influential factor, because of which everything can be dragged out and generally not go according to plan. If there are tests in the project, only explicit dependencies everywhere, and the functionality is well isolated, everything is not so scary. It's much worse if you're dealing with legacy code with no tests, or code that's saturated with implicit dependencies. Things like “magic functions” (when it’s hard to see the final call stack from the code) and code duplication (when several independent sections need to be edited to change some functionality) can also complicate matters.

To learn to estimate the time required for work adequately, you need to practice constantly. At the beginning of my career I did exactly that: I estimated the time to complete any incoming task, even if no one required it, and then checked how accurately I had hit my estimate. While completing the task, I noted which actions took more time. If something greatly increased the time, I remembered that moment and took it into account in later estimates.

To an objective estimate of the time needed purely for the work itself, a small margin should be added to cover force-majeure situations. It is often expressed as a percentage of the main task, and it differs from person to person: someone adds 20% of the time, someone 10%, and someone 50%.

It is also useful to analyze the reasons for delays after each serious slip. If you lacked qualification, work on your weak points; if the problem was organizational, figure out what prevented normal work.


, Technical Director of the Center for Innovative Technologies and Solutions, Jet Infosystems

A large number of articles are devoted to methods for assessing the complexity of a project, including the duration of work and individual tasks. However, this is still the cause of conflicts both within the project team and when communicating with the customer.

The main assistant in the assessment is experience. Try to somehow compare the new task with the ones already done. If you're doing a report, look at how long a similar report took in the past. If you are doing something new, try to break it down into known parts and evaluate them. If the task is completely new, allocate time for studying (even better - coordinate this time with the one who sets the task).

Pay attention to the accompanying steps - if you need to develop a service, then unit testing must also be included in the assessment (and maybe not only a unit), the preparation of test data will take some time. You should consider integration with other services, etc. Allow time to fix the defects that you find on your own or with the help of testers. A lot of time can be wasted on “invisible” tasks. For example, there is an assessment for development and there is an assessment for testing, but the transfer of an artifact for testing may be associated with the deployment of stands. Therefore, it is important to mentally imagine the whole process in order not to miss anything.

After determining the labor intensity, it is necessary to include new work in the calendar, not forgetting about other tasks and activities that go in parallel.

And don't forget that plans are worthless, but planning is priceless. Learn to correct plans in time, keep all stakeholders informed and escalate in a timely manner so that missed deadlines do not come as a surprise to anyone.


A question that cannot be answered in a short form. If it were simple, then the problem of violation of deadlines would not exist.

To make development deadlines more predictable, you must first understand the reasons why programmers consistently make mistakes.

The first reason is that most of the tasks that a programmer does are unique to one degree or another. That is, most likely, the programmer will do such a task for the first time. He doesn't have a good idea of ​​how long this job will take. If this is a programmer with solid experience and he had to perform a similar task, his assessment will be closer to reality.

Let's use a simple analogy - if you've never dug a ditch, you can't say exactly how long it will take you to dig a trench 30 cm wide, 60 cm deep and 20 meters long. If you've been digging before, your estimated run time will be much closer to the actual run time.

The second reason is that programmers are optimists by nature. That is, when considering a task, choosing an implementation option and estimating the changes, the developer expects everything to work as he intends, and he does not think about the problems he will meet along the way. Most of the time he cannot foresee them. For example, there is a task that a programmer can implement using a third-party open-source library. At the estimation stage he found it on the Internet and read its description - it suits him. He even correctly estimated the amount of work needed to integrate the library. But he did not foresee at all that the library would fail in the environment of his software product.

The developer will have to not only integrate the library into his code, but also fix the bug in the library itself. And often the developer does not allow time for correcting such mistakes. Statistics show that testing and fixing bugs can take up to 50% of the time spent on coding. The figure depends on the developer's qualification, the environment and the development practices used (unit tests, for example, significantly reduce this time and the total duration and labor intensity of the task).

If we return to the analogy with the digger: the digger did not expect that his shovel would break and that he would have to spend two hours looking for a new handle.

The third reason is unforeseen requirements. No other area of material production with which software development customers like to compare it has such a stream of new requirements. Imagine the reaction of a digger who has dug 19 meters out of 20 and hears from the customer that the ditch should not run in a straight line but zigzag with 97-centimeter legs.

How to deal with all this and how to live with such uncertainty? By reducing the uncertainty and providing a time reserve.

The easiest way to bring expectations closer to reality is to use the half-joking "Pi" rule of thumb: having received an estimate from the developer (in time or labor), multiply it by the number Pi (≈ 3.14159). The more experienced the developer who made the estimate, the lower this coefficient can be.

It is obligatory to practice decomposing the original task into small subtasks no larger than 4 hours each. The more detailed the decomposition, the higher the chance that the estimate will be close to the actual effort and duration.
Returning to the reserve: this time should be allocated at the end of the project. It is bad practice to add a reserve to every individual task - Parkinson's law, "work expands to fill the time allotted for it", is strictly observed.
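As a rough sketch (the numbers and the `adjusted_estimate` helper are invented for illustration), the rule-of-thumb scaling and the single end-of-project reserve could be combined like this:

```python
PI = 3.14159

def adjusted_estimate(subtask_hours, experience_factor=PI):
    """Scale a raw developer estimate by a rule-of-thumb factor and keep the
    slack as one reserve at the end, not spread over every task."""
    raw = sum(subtask_hours)               # decomposed subtasks, each <= 4 hours
    total = raw * experience_factor
    return {"raw": raw, "with_reserve": round(total, 1),
            "reserve": round(total - raw, 1)}

print(adjusted_estimate([4, 3, 2, 4]))   # 13 h raw -> about 40.8 h in total
```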

If we sum up a short “total”, then in order to correctly determine the timing of the work, the following actions will be useful:

  • perform work decomposition, break the task into as detailed steps as possible;
  • carry out prototyping;
  • limit the implementation of previously unforeseen requirements. This does not mean that they should not be done, but it is advisable to highlight these requirements and agree with the customer on changes in the timing and cost for their implementation;
  • take into account the time to stabilize the solution;
  • use code quality improvement practices, such as writing unit tests;
  • make a general reserve.

And remember that if the actual time exceeds your estimate by no more than 30%, that is a very good result.


For the most accurate assessment, you need experience in real development, and in a specific area. But there are also general rules that will help to avoid mistakes in planning and problems when handing over work to the customer. I would describe these rules like this.

First, you need to understand the problem. This seems to be obvious and does not directly relate to the timing, but in fact it is a key point. Even in serious large projects, one of the main factors of failure and delay is the problem in determining the requirements. For novice developers, unfortunately, this is a serious problem - they don’t read the technical specifications or they read and understand very selectively (out of ten points, five were remembered and completed, and the rest were remembered already when submitting the result). It is clear that a misunderstood task cannot be correctly implemented on time.

Further - to estimate time for development. The peculiarity of programming is that there are no absolutely identical tasks. This makes our work more interesting, but estimating deadlines is more difficult. Decomposition works well here, i.e. dividing a complex unique task into a sequence of small familiar subtasks. And each of them can already be estimated in hours quite adequately. Let's sum up the estimates of subtasks - and get the estimate of the entire task.

As a rule, such an estimate includes only the costs of coding itself. This is certainly the most important part of the development, but far from the only one (and often not the most voluminous). The full completion of the task also includes reading and clarifying the TOR, meeting with colleagues or the customer, debugging and testing, compiling documentation, delivering the result (demonstration to the customer and possible alterations according to his comments). How much time it will take you to do these actions, only experience will tell. At first, it is important, at least, not to forget to take them into account in the calculations, and you can ask more experienced colleagues for a rough estimate of the time.

So, we take an estimate of the cost of coding, add an estimate of the cost of additional work - and get the desired estimate of the time to complete the task. But that's not all! You need to indicate the planned completion date for the task. It would be a mistake to simply take and divide the labor costs (in hours) by 8 hours and add to the current date. In real practice, a developer never (well, almost never) works 100% of the time on one specific task. You will definitely spend time on other work - important, but not directly related to the main one. For example, helping colleagues, training, reporting, etc. Usually, when planning, it is considered that 60-70% of the working time goes directly to work on the current project. Additionally, you need to take into account possible delays that will prevent you from continuously working on the task. For example, if for this you need to interact with other people (colleagues, customers), then take into account their employment, work schedule, etc.

Here are the basic rules that, in my opinion, will help the developer avoid problems in estimating and meeting deadlines. In addition, the key is the accumulation of own experience both in the implementation of tasks and in evaluation. For example, after completing a task, it is very useful to compare your initial estimate with the actual timeline and draw conclusions for the future. And, of course, it is worth studying someone else's experience. I would recommend S. McConnell's book "How much does a software project cost" and S. Arkhipenkov's "Lectures on software project management" on the topic.


When assessing and scheduling, it is necessary to:

  1. Decompose the task into small functional pieces in such a way that there is a clear understanding of how long the development of each such piece will take.
  2. In parallel with the decomposition, there will definitely be additional questions about the functionality that was not described in the problem statement. It is necessary to get answers to such questions, since this directly relates to the scope of work and, consequently, the timing.
  3. Add a certain percentage of risks to the final assessment. This is determined by experience. You can start, for example, with risks of 10-15%.
  4. Understand how many hours a day a programmer is willing to devote to completing a task.
  5. We divide the final estimate by the number of hours we allocate per day, and we get the number of days required for implementation.
  6. We focus on the calendar and the required number of days to complete. We take into account weekends and other days when the programmer will not be able to work on the task, as well as the start date of work (the developer is not always ready to take the task to work on the same day). Thus, we get the start and end dates of the work.
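A toy sketch of steps 1-6 follows; all parameters (the risk share, hours per day, start date and the `finish_date` helper itself) are invented for the example.

```python
from datetime import date, timedelta

def finish_date(task_hours, start, hours_per_day=5, risk=0.15):
    """Turn decomposed estimates into a planned completion date.

    task_hours:    estimates of the functional pieces, in hours
    hours_per_day: how many hours a day the programmer really gives the task
    risk:          extra share added for unforeseen work (10-15% to start with)
    """
    total = sum(task_hours) * (1 + risk)
    days_needed = -(-total // hours_per_day)          # round up to whole days
    current = start
    while days_needed > 0:
        if current.weekday() < 5:                     # skip weekends
            days_needed -= 1
        current += timedelta(days=1)
    return current - timedelta(days=1)                # last working day used

print(finish_date([6, 4, 8, 3], start=date(2024, 3, 4)))   # 2024-03-08
```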


In our company, task planning always goes through several stages. On the business side, we formulate 5-6 strategic goals for the year. These are high-level tasks, for example, to increase some parameter by a certain percentage. Then various departments of the company form business tasks for all IT teams. The deadlines for these tasks receive an initial rough estimate, which is often produced by all team members - the manager, analyst, developer and tester. Having received this estimate, the business prioritizes the tasks, taking into account the company's strategic goals. Cross-cutting strategic goals help here: with them it becomes obvious that we are all working for a common cause, and no one simply pulls in their own direction. Sprints are then assembled from the tasks with precise estimates and deadlines. For some teams they are quarterly, for others monthly. The tasks that fall into the next sprint according to the preliminary estimate are estimated precisely by the teams. Large tasks are broken down into lower-level ones, each with a specific performer responsible for it, and it is this person who gives the precise estimate.

At this stage, it is important not to forget to add a margin of time to fix bugs, because only those who do nothing do not make mistakes. This is well understood by both the Product Owner and business customers. At the same time, the required margin of time must be adequate: no one will understand a developer who sets a simple task for too long a deadline, he will be asked to justify the decision. The most difficult thing is to explain to the business why it takes time to refactor. We are grateful to our company for the fact that we succeed from time to time, because in the end, refactoring leads to lightening the infrastructure and putting things in order in the code, which increases the stability of the system and can significantly speed up the development of new functions.

Sometimes errors in the assessment do occur. In my opinion, it is impossible for the development department in large companies with a developed infrastructure to completely avoid this. In this case, it is important that the developer informs his manager about what is happening in time, and he, in turn, has time to warn the business and “replay” something in the general plans of the company. In this mode, working is much more correct than frantically trying to do in 3 days what takes 5, and then drowning in a large number of errors that arose due to such a rush.


The correct answer to both parts of the question [how to learn to plan properly and deliver a project on time - Ed.] is experience. There are no other ways to "know Zen". According to decision theory, any accurate conclusions can be built only on the analysis of a body of already available data, and the more data there are, the more accurate the final forecast and estimate.

In the words of Herbert Shaw: "Experience is a school in which a man learns what a fool he was before." A fairly simple conclusion follows from this: if the programmer already has experience that correlates with the task, he can rely on it, if not, on the experience of "colleagues in the shop."

Next, you need to understand that direct scheduling is something people do very, very poorly, especially in development. When estimating due dates, it is considered good practice to apply an "adjustment factor" to the original estimate. This factor can range from 1.5 to 3, depending on the developer's experience and the overall degree of uncertainty of the tasks solved within the project.


When determining the timing, it is important to consider many factors.

For example, work experience. How clearly do you picture the scope of the work ahead? Have you done something similar before? Clearly, the more experience you have, the faster the work will be done.

A well-written technical task plays a significant role in determining the timing. This is very difficult in our area. Often the client himself does not know what he wants, so I advise you to spend an extra day or two, but to get a clear idea from the client about the desired result. It is important that this representation is mutual. And only after that you can begin to negotiate the amount and terms.

Also, always take risks. For beginners, I recommend multiplying the estimated deadlines by two. After all, it is better to hand over the project ahead of schedule and grow as a specialist in the eyes of the customer, rather than hand it over later and ruin your reputation.


The general recommendation is that a developer needs to learn how to correctly decompose tasks, always look for possible pitfalls, rely on their own experience and do not forget to warn customers and colleagues in time if the task cannot be solved within the specified time frame.

Building a clear plan is much more difficult than determining the deadline for completing a single task. At the same time, it is important not only to deliver the project on time, but also to make sure that the system you have developed correctly solves business problems. Here, IT teams are helped by various software development methodologies: from RUP and MSF to SCRUM and other Agile formats. The choice of tools is very extensive, and many of our customers want to understand in advance how we will work with them in the project, what principles we adhere to.

By the way, the topic of Agile today is becoming close to business, and even in individual projects to the public sector, since the principles of this methodology allow you to implement projects very quickly, managing customer expectations at each iteration. For example, in an Agile team there are practically no protracted discussions with the customer. Forget about dozens of pages describing unnecessary technical details, such as the speed at which a drop-down list appears. Give the customer the opportunity to try an intermediate version of the system, then it will become much easier for you to understand each other.

The agile team plans everything together and determines the optimal amount of labor needed to solve a particular problem. For example, one of the techniques is called "Planning Poker", where each participant anonymously gives his estimate of the labor required for a specific task. After that, the team determines the average weight of the task in story points or man-hours and distributes the work according to the "who likes what" principle. At the same time, every day the team gathers for a 15-minute stand-up, where everyone spends a couple of minutes on the status of their current tasks, including any difficulties that have arisen. The team quickly removes the detected obstacles, so the customer sees the next increment of the developers' work as soon as possible. Developers do not miss deadlines because they were reluctant to bother the team yet again or wasted precious time in futile attempts to figure things out on their own. Incidentally, at such mini-status meetings developers want to show themselves at their best and demonstrate a responsible attitude to their work. It really motivates and instills self-discipline.

Introduction

The purpose of the workshop on the organization of production is to expand and deepen theoretical knowledge, instill the necessary skills to solve the most common tasks in practice on the organization and planning of production.

The workshop includes tasks for the main sections of the course. At the beginning of each topic, brief guidelines and theoretical information, typical tasks with solutions and tasks for independent solution are presented.

The presence in each topic of guidelines and brief theoretical information allows you to use this workshop in distance learning.


Calculation of the duration of the production cycle

The duration of the production cycle serves as an indicator of the efficiency of the production process.

The production cycle is the period of time that objects of labor spend in the production process, from the moment the raw materials are launched to the moment the finished product is released.

The production cycle consists of working time, during which labor is expended, and break times. Depending on the reasons that cause them, breaks can be divided into:

1) natural or technological breaks, which are due to the nature of the product;

2) organizational breaks (between shifts).

The duration of the production cycle is made up of the following components:

T cycle = t tech + t nat + t tr + t qc + t io + t is,

where t tech is the time of technological operations;

t nat is the time of natural processes (drying, cooling, etc.);

t tr is the time of transporting the objects of labor;

t qc is the quality-control time;

t io is the inter-operation waiting time;

t is is the time spent in inter-shop warehouses;

(t tr and t qc can be combined with t io).

The calculation of the duration of the production cycle depends on the type of production. In mass production, the duration of the production cycle is determined by the time the product is on the stream, i.e.

T cycle = t r · M,

where t r is the release takt;

M is the number of workplaces.

The release takt is the time interval between the release of one manufactured product and the release of the next one.

The release takt is determined by the formula

t r = T eff / V,

where T eff is the effective working-time fund for the planning period (shift, day, year);

V is the output volume for the same period (in natural units).

Example: T shift = 8 hours = 480 min; T breaks = 30 min; → T eff = 480 − 30 = 450 min.

V = 225 pcs; → t r = 450/225 = 2 min.

In serial production, where processing is carried out in batches, the duration of the technological cycle is determined not for a unit of production, but for the entire batch. Moreover, depending on the method of launching the batch into production, we get different cycle times. There are three ways of moving products in production: serial, parallel and mixed (series-parallel).


I. With sequential movement of parts, each subsequent operation begins only after the previous one has been completed for the whole batch. The cycle duration with sequential movement of parts is

T seq = n · Σ (t pcs i / C i), i = 1, ..., m,

where n is the number of parts in the batch being processed;

t pcs i is the piece time of the i-th operation;

C i is the number of workplaces at the i-th operation;

m is the number of operations in the technological process.

Given a batch of products, consisting of 5 pieces. The batch is skipped sequentially through 4 operations; the duration of the first operation was 10 min; the second, 20 min; the third, 10 min; and the fourth, 30 min (Fig. 1).

Picture 1

T cycle = T seq = 5 · (10 + 20 + 10 + 30) = 350 min.

The sequential way of moving parts has the advantage that it keeps the equipment running without downtime. But its disadvantage is that the duration of the production cycle in this case is the greatest. In addition, significant stocks of parts are being created at workplaces, which requires additional production space.

II. With parallel movement of the batch, individual parts are not held up at the workplaces but are transferred to the next operation one by one, without waiting for the whole batch to finish processing. Thus, with parallel movement of a batch of parts, different operations are performed simultaneously on different parts of the same batch at the various workplaces.

The processing time of a batch with parallel movement of products is drastically reduced:

T par = n tr · Σ (t pcs i / C i) + (n − n tr) · (t pcs / C) max,

where n tr is the number of parts in the transfer lot (transport lot), i.e. the number of products transferred simultaneously from one operation to the next;

(t pcs / C) max is the longest operation cycle.

With a parallel launch of a batch of products, the processing of the whole batch proceeds continuously only at those workplaces where long operations follow short ones. Where short operations follow long ones (the third operation in our example), they are performed with interruptions, i.e. the equipment stands idle. Here a batch of parts cannot be processed without delays, because the preceding (long) operation does not allow it.

In our example: n = 5; t 1 = 10; t 2 = 20; t 3 = 10; t 4 = 30; C = 1; parts are transferred by the piece (n tr = 1).

T par = 1 · (10 + 20 + 10 + 30) + (5 − 1) · 30 = 70 + 120 = 190 min.

Consider the scheme of parallel movement of parts (Fig. 2):

Figure 2

III. To eliminate interruptions in the processing of individual parts of the batch at all operations, a parallel-sequential (mixed) launch method is used, in which parts (after processing) are transferred to the next operation one by one or as "transport" backlogs of several pieces, in such a way that work is not interrupted at any workplace. The mixed method takes the continuity of processing from the sequential method, and the transfer of a part to the next operation immediately after its processing from the parallel method. With the mixed launch method, the cycle duration is determined by the formula

T mix = T seq − (n − n tr) · Σ t short,

where t short is the shorter operation cycle of each pair of adjacent operations;

m − 1 is the number of such pairs over which the sum is taken.

If the subsequent operation is longer than the previous one, or equal to it in time, then this operation is started individually, immediately after the first part has been processed in the previous operation. If, on the contrary, the subsequent operation is shorter than the previous one, interruptions occur here during piece transfer. In order to prevent them, it is necessary to accumulate a transport backlog of such a volume that is sufficient to ensure work at a subsequent operation. In order to practically find this point on the graph, it is necessary to transfer the last detail of the batch and set aside the duration of its execution to the right. The processing time of all other parts of the batch is plotted on the graph to the left. The start of processing the first part shows the moment when the transport backlog from the previous operation should be transferred to this operation.

If adjacent operations are the same in duration, then only one of them is accepted as short or long (Fig. 3).

Figure 3

T mix = 5 · (10 + 20 + 10 + 30) − (5 − 1) · (10 + 10 + 10) = 350 − 120 = 230 min.
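The three formulas can be checked against the worked example with a small sketch; the `cycle_times` helper is illustrative and assumes the inputs described above.

```python
def cycle_times(n, n_tr, ops):
    """Cycle duration of a batch for the three movement types.

    n     - batch size
    n_tr  - transfer (transport) lot size
    ops   - list of (piece_time, workplaces) per operation
    """
    per_op = [t / c for t, c in ops]            # time per piece per operation
    t_seq = n * sum(per_op)
    t_par = n_tr * sum(per_op) + (n - n_tr) * max(per_op)
    shorter = sum(min(a, b) for a, b in zip(per_op, per_op[1:]))
    t_mix = t_seq - (n - n_tr) * shorter
    return t_seq, t_par, t_mix

# Worked example: 5 parts, piece-by-piece transfer, operations of 10, 20, 10, 30 min
print(cycle_times(5, 1, [(10, 1), (20, 1), (10, 1), (30, 1)]))   # (350.0, 190.0, 230.0)
```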

The main ways to reduce the duration of the production cycle are:

1) Reducing the labor intensity of manufacturing products by improving the manufacturability of the manufactured structure, the use of computers, and the introduction of advanced technological processes.

2) Rational organization of labor processes, arrangement and maintenance of workplaces on the basis of specialization and cooperation, extensive mechanization and automation of production.

3) Reduction of various planned and unplanned breaks at work based on the rational use of the principles of the scientific organization of the production process.

4) Acceleration of the course of reactions as a result of an increase in pressure, temperatures, transition to a continuous process, etc.

5) Improving the processes of transportation, warehousing and control and combining them in time with the process of processing and assembly.

Reducing the duration of the production cycle is one of the serious tasks of the organization of production, because it affects the turnover of working capital, labor costs, storage space, the need for transport, etc.

Tasks

1 Determine the duration of the processing cycle of 50 parts with serial, parallel and serial-parallel types of movement in the production process. The process of processing parts consists of five operations, the duration of which, respectively, is, min: t 1 =2; t 2 =3; t 3 =4; t 4 =1; t 5=3. The second operation is performed on two machines, and each of the others on one. The size of the transfer lot is 4 pieces.

2 Determine the duration of the processing cycle of 50 parts with serial, parallel and serial-parallel types of movement in the production process. The process of processing parts consists of four operations, the duration of which, respectively, is, min: t 1 =1; t 2 =4; t 3 =2; t 4=6. The fourth operation is performed on two machines, and each of the others on one. The size of the transfer lot is 5 pieces.

3 A batch of parts of 200 pieces is processed with its parallel-sequential movement during the production process. The process of processing parts consists of six operations, the duration of which, respectively, is, min: t 1 =8; t 2 =3; t 3 =27; t 4 =6; t 5 =4; t 6=20. The third operation is performed on three machines, the sixth on two, and each of the other operations on one machine. Determine how the cycle time for processing a batch of parts will change if the parallel-sequential version of movement in production is replaced by a parallel one. The size of the transfer lot is 20 pieces.

4 A batch of parts of 300 pieces is processed with its parallel-sequential movement during the production process. The process of processing parts consists of seven operations, the duration of which, respectively, is, min: t 1 =4; t 2 =5; t 3 =7; t 4 =3; t 5 =4; t 6 =5; t 7=6. Each operation is performed on one machine. Transfer batch - 30 pieces. As a result of improved production technology, the duration of the third operation was reduced by 3 minutes, the seventh - by 2 minutes. Determine how the processing cycle of a batch of parts changes.

5 Given a batch of blanks, consisting of 5 pieces. The batch is skipped through 4 operations: the duration of the first is 10 minutes, the second is 20 minutes, the third is 10 minutes, the fourth is 30 minutes. Determine the duration of the cycle by analytical and graphical methods for sequential movement.

6 Given a batch of blanks, consisting of four pieces. The party is skipped through 4 operations: the duration of the first is 5 minutes, the second is 10 minutes, the third is 5 minutes, the fourth is 15 minutes. Determine the duration of the cycle by analytical and graphical methods with parallel movement.

7 Given a batch of blanks, consisting of 5 pieces. The batch is skipped through 4 operations: the duration of the first is 10 minutes, the second is 20 minutes, the third is 10 minutes, the fourth is 30 minutes. Determine the duration of the cycle by analytical and graphical methods for serial-parallel motion.

8 Determine the duration of the technological cycle for processing a batch of products from 180 pcs. with parallel and sequential variants of its movement. Build graphs of the processing process. The size of the transfer lot - 30 pcs. The norms of time and the number of jobs in operations are as follows.

(the time from when work becomes ready until the first moment it starts executing on resources); minimizing latency or response time (the time from when work becomes ready until it is completed, in the case of batch activity, or until the system responds and hands the first output to the user, in the case of interactive activity); or maximizing fairness (an equal amount of CPU time for each process, or, more generally, amounts of time appropriate to each process's priority and workload). In practice these goals often conflict (for example, throughput versus latency), so the scheduler makes a suitable trade-off. Preference is given to any one of the concerns mentioned above, depending on the user's needs and objectives.

OS/360 and successors

AIX

In AIX Version 4, there are three possible values ​​for the thread scheduling policy:

  • First In, First Out: once a thread with this policy is scheduled, it runs to completion unless it blocks, voluntarily yields control of the processor, or a higher-priority thread becomes dispatchable. Only fixed-priority threads can have a FIFO scheduling policy.
  • Round Robin: this is similar to the AIX Version 3 round-robin scheduler scheme based on 10-ms time slices. When an RR thread has control at the end of its time slice, it moves to the tail of the queue of threads of the same priority. Only fixed-priority threads can have a Round Robin scheduling policy.
  • OTHER: this policy is defined by POSIX 1003.4a as implementation-defined. In AIX Version 4 it is equivalent to RR, except that it applies to threads with non-fixed priority. Recalculating the priority value of a running thread on every clock interrupt means that a thread can lose control because its priority value has risen above that of another thread. This is the AIX Version 3 behavior.
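For comparison only: on Unix-like systems that expose the POSIX scheduling policies through Python (this is a generic illustration, not AIX-specific, and raising the policy normally requires elevated privileges), the policy names above can be inspected roughly like this.

```python
import os

def show_policy():
    """Print the scheduling policy of the calling process (pid 0 = self)."""
    policy = os.sched_getscheduler(0)
    names = {os.SCHED_OTHER: "OTHER", os.SCHED_FIFO: "FIFO", os.SCHED_RR: "RR"}
    print("current policy:", names.get(policy, policy))
    print("RR time slice (s):", os.sched_rr_get_interval(0))

if __name__ == "__main__":
    show_policy()
    # Requesting round-robin with priority 1 (usually needs root):
    # os.sched_setscheduler(0, os.SCHED_RR, os.sched_param(1))
```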

Threads are primarily of interest to applications that currently consist of multiple asynchronous processes. Such applications can impose a lighter load on the system if converted to a multithreaded structure.