Thesis for the Master's Degree in Computer Engineering
Specialization: Software
Abstract:
With the advancement of hardware, then operating systems, and finally application software, the demand for more services, higher speed, and more processing power has also grown, to the point where users can no longer run the software they need without suitable hardware. New versions and new kinds of software are produced so quickly that users cannot upgrade the corresponding hardware at the same pace, because of the excessive cost. Clouds were therefore created: software and services are deployed on them, and users can use these services by paying a small fee, without worrying about data loss or hardware failures. Clouds in turn need software to control resources, services, and the various hardware requests coming from users. This area is divided into several parts, one of which is task scheduling in clouds. In this thesis we try to design and optimize a scheme that achieves the highest efficiency at the lowest cost when distributing different tasks across different clouds.
Chapter One: Research Generalities
1-1 Introduction
Task scheduling has always been, and will remain, one of the most important topics in operating systems, because the goal is always a method that produces the most optimal schedule in the least time. The topic is even more prominent in clouds: jobs arrive from many users, possibly in different geographical locations and with different requests, and these requests must be managed so that each receives its own services and is answered in the most optimal time. We therefore study the problem of task scheduling in cloud computing and try to present an algorithm that provides a more optimal solution given the time constraints and the differences among the underlying hardware.
1-2 Statement of the problem
Cloud computing is a long-held dream of large-scale computation that has now become a new paradigm in large-scale processing: it can pool a very large amount of processing resources, including virtualized ones, and process and deliver data with the least processing effort and time while answering the user's request.
One of the main attractions of cloud computing from an economic point of view is that users consume only what they need and pay only for what they actually use, and the resources are available at any time and in any situation through the cloud (the Internet). At the same time, data centers consume a significant and growing amount of energy; a typical data center uses on average as much energy as 25,000 home systems. Two issues are therefore important: the response time to users' requests, which must at least meet the users' expectations, and the need for the system to stay responsive and maintain real-time behavior. For example, for a user playing a game through cloud servers, the impact of a fired bullet must be displayed within a certain period of time so that the player can plan the next move. In general, several factors matter when executing user requests in cloud systems: resources, reliability, energy consumption, and response time are all important to the system as a whole, and different scheduling algorithms try to strike the best possible balance among these objectives; a minimal sketch of such a weighted objective is given below.
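As an illustration of this trade-off (and not the algorithm developed in this thesis), the following Python sketch scores candidate task-to-machine assignments with a weighted combination of estimated finish time and energy; the machine fields, the weights, and the simple greedy loop are assumptions made only for this example.

# Illustrative sketch: greedy assignment of tasks to machines using a
# weighted cost of estimated finish time and energy. All names, weights,
# and the cost model are assumptions made for illustration only.

def cost(task_length, machine, w_time=0.7, w_energy=0.3):
    """Weighted cost of running one task on one machine."""
    run_time = task_length / machine["mips"]          # estimated execution time
    finish = machine["ready_time"] + run_time         # when the task would finish
    energy = machine["power_watts"] * run_time        # rough energy estimate
    return w_time * finish + w_energy * energy        # weights set the trade-off

def greedy_schedule(tasks, machines):
    """Assign each task to the machine with the lowest weighted cost."""
    plan = []
    for length in sorted(tasks, reverse=True):        # longest task first
        best = min(machines, key=lambda m: cost(length, m))
        best["ready_time"] += length / best["mips"]   # that machine is now busier
        plan.append((length, best["name"]))
    return plan

tasks = [4000, 1000, 2500]                            # task lengths (instructions)
machines = [
    {"name": "vm1", "mips": 1000, "power_watts": 150, "ready_time": 0.0},
    {"name": "vm2", "mips": 2000, "power_watts": 250, "ready_time": 0.0},
]
print(greedy_schedule(tasks, machines))

Changing the weights shifts the balance between response time and energy, which is exactly the kind of trade-off the scheduling algorithms discussed in this thesis must manage.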
1-3 Importance and necessity of research
1-3-1 Operating system
An operating system (OS) is software that manages a computer's resources and provides a platform on which application software can run and use its services. The operating system is one of the most essential pieces of software in a computer system. It provides services to applications and users. Applications access these services either through application programming interfaces [1] or through system calls. By calling these interfaces, an application can request a service from the operating system, pass parameters, and receive the result of the operation. Users may interact with the operating system through some type of software user interface, such as a command-line interface or a graphical user interface. For desktop and handheld computers, the user interface is generally considered part of the operating system. In large multi-user systems such as Unix and Unix-like systems, the user interface is usually implemented as an application that runs outside the operating system. Examples of popular modern operating systems include Android, BSD, iOS, Linux, OS X, QNX, Microsoft Windows, Windows Phone, and z/OS.
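As a small illustration of this request/response pattern, the following Python snippet uses the standard os module, whose functions wrap operating-system calls: the application asks the kernel for its process identifier and then asks it to write bytes to standard output. The snippet is only illustrative and is not tied to any particular cloud platform.

import os

# The application requests services from the operating system through
# thin wrappers around system calls and receives the results back.
pid = os.getpid()                                    # ask the kernel for our process ID
os.write(1, f"running as process {pid}\n".encode())  # write bytes to file descriptor 1 (stdout)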
Existing theories and algorithms have tried to strike a balance between the resources used, the energy consumed, and the efficiency obtained, but none of them has achieved very high efficiency with very low energy consumption. A typical data center consumes on average as much energy as 25,000 home systems, while users' requests cannot be executed late or incompletely. Balancing these concerns, together with the reliability and security of the system, has led everyone to look for a solution that brings all of these factors together and extracts the most efficiency from the fewest resources.
Real-time operating system
A real-time system is a multi-tasking system that is usually used as a controller in a specific application. Such a system must give the desired answer within a certain time. Control systems for scientific experiments, medical imaging, industrial control, and some display systems belong to this category. The main purpose of a real-time system is a fast and guaranteed response to an external event. Real-time systems usually have no secondary storage devices and use ROM memory instead. Advanced operating systems are also absent from these systems, because an operating system separates the user from the hardware and this separation introduces uncertainty into the response time. Systems in which a deadline [4] must be met are called hard real-time, and systems that tolerate missed deadlines are called soft real-time. An example of a hard real-time application is the control of a car's engine (a delayed response can have disastrous results); an example of a soft real-time application is barcode scanning at a store terminal (the response should be fast, but not as fast as in hard real-time systems).
In related work, a priority-based job scheduling algorithm for cloud computing called PJSC (A Priority based Job Scheduling Algorithm in Cloud Computing) has been proposed; it categorizes jobs based on their priority and, according to its authors, takes many different criteria into account when making decisions. A minimal sketch of priority-based dispatching is given at the end of this subsection.
Multi-user operating system
Multi-user systems allow several users to access a computer system at the same time. Time-sharing systems and web-based systems can be classified as multi-user systems. In a time-sharing system there is only one processor, which is switched between the programs of different users at high speed by scheduling mechanisms, so each user feels that the entire computer is at his or her disposal.
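Returning to the priority-based scheduling idea mentioned above, the following Python sketch shows one simple way a dispatcher could order incoming jobs by priority and, within equal priorities, by deadline. It is only an illustrative outline with assumed job fields, not the PJSC algorithm itself.

import heapq

# Illustrative priority dispatcher: jobs with a smaller priority number run
# first; ties are broken by the earlier deadline. Field names are assumptions.
def dispatch(jobs):
    heap = [(j["priority"], j["deadline"], j["name"]) for j in jobs]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, name = heapq.heappop(heap)
        order.append(name)
    return order

jobs = [
    {"name": "backup",    "priority": 3, "deadline": 50.0},
    {"name": "render",    "priority": 1, "deadline": 10.0},
    {"name": "game-tick", "priority": 1, "deadline": 0.1},   # hard real-time style deadline
]
print(dispatch(jobs))   # ['game-tick', 'render', 'backup']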
Single-processor operating system
These are fourth-generation (current-generation) operating systems that run on a single processor, such as Windows XP, Vista, 98, and Me, most of which are Microsoft products.
Network operating system
Operating systems such as Novell NetWare, which is among the most widely used; the features of this type of operating system are designed for networking.
Distributed operating systems
These operating systems present themselves to the user like single-processor operating systems, but in practice they use multiple processors. This type of operating system runs in a network environment: a program is executed on different computers, and the final answer returns to the user's main system. The processing speed of such systems is very high.
Clouds are, in effect, distributed operating systems.
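As a rough, local analogy for how a distributed system, and by extension a cloud, splits a job across several machines and returns a single result to the user, the following Python sketch fans a computation out to a pool of worker processes and gathers the partial results; a real cloud uses networked nodes rather than local processes, so the example is only illustrative.

from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    """Work done on one 'node': sum a slice of the data."""
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]           # split the job into 4 pieces
    with ProcessPoolExecutor(max_workers=4) as pool:  # stand-ins for remote machines
        parts = pool.map(partial_sum, chunks)
    print(sum(parts))                                 # the final answer returns to the user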