Efficient Power Management in Wireless Communication


R. Saranya 1, Mrs. J. Meena 2

1 M.E. Student, Department of ECE, P.S.R. College of Engineering, Sivakasi, Tamilnadu, India
2 Assistant Professor, Department of ECE, P.S.R. College of Engineering, Sivakasi, Tamilnadu, India

Abstract — Data transmission in point-to-point wireless communication degrades energy efficiency. One of the primary solutions to this problem is to minimize power consumption. Here, delay-sensitive data (e.g., multimedia data) are transmitted over a wireless channel, and the delay-sensitive communication system operates under dynamic environment conditions. A Q-learning algorithm that dynamically adapts to the environment is used to achieve power management. Q-learning-based power management is flexible and highly adaptive, requiring only a simple update step at the end of each time slot. The aim of dynamic power management is to reduce power consumption without affecting the overall performance of the device. The disadvantage of this algorithm, however, is that the holding cost and the number of packet overflows increase. To reduce cost and energy consumption, a transmission scheduling algorithm is proposed. This algorithm schedules the amount of data for the desired users with a view to minimizing the energy consumption of a wireless device and prolonging its battery lifetime.

Keywords — Energy-Efficient Wireless Communication, Dynamic Power Management, Power Control, Adaptive Modulation and Coding, Markov Decision Process, Reinforcement Learning.

I. INTRODUCTION

Wireless communication is the transfer of information between two or more points that are not connected by an electrical conductor. A delay-sensitive communication system operates under dynamic environment conditions (e.g., a fading channel) and dynamic traffic loads (e.g., variable bit rate). In such systems, the primary concern has typically been the reliable delivery of data to the receiver within a tolerable delay.
Increasingly, however, battery-operated mobile devices are becoming the primary means by which people consume, author, and share delay-sensitive content (e.g., real-time streaming of multimedia data, videoconferencing, gaming, etc.). Consequently, energy efficiency is becoming an increasingly important design consideration. To balance the competing requirements of energy efficiency and low delay, fast learning algorithms are needed to quickly adapt the transmission decisions to time-varying and a priori unknown traffic and channel conditions. In this paper, we consider minimizing one type of cost (delay, throughput, etc.) while keeping the other types of cost (power, delay, etc.) below given bounds. Posed in this way, our control problem can be viewed as a constrained optimization problem over a given class of policies. Telecommunications networks are designed to enable the simultaneous transmission of different types of traffic: voice, file transfers, interactive messages, video, etc. Typical performance measures are transmission delay, power consumption, throughput, and transmission error probabilities. Different types of traffic differ from each other in their statistical properties as well as in their performance requirements. For interactive messages, for example, the average end-to-end delay must be limited. Strict delay constraints are important for voice traffic; there, we impose a delay limit of 0.1 s, beyond which the delay quickly becomes intolerable. For non-interactive file transfer, we often wish to minimize delay or maximize throughput.

Copyright @ IJIRCCE www.ijircce.com

A. PHYSICAL LAYER: ADAPTIVE MODULATION AND POWER CONTROL

Power control is the intelligent selection of transmit power in a communication system to achieve the best performance within the system. The physical layer is assumed to be a single-carrier single-input single-output (SISO) system with a fixed symbol rate of 1/Ts (symbols per second). The transmitter sends at a data rate of β/Ts (bits/s) to the receiver, where β ≥ 1 is the number of bits per symbol determined by the modulation scheme in the AMC component shown in Fig. 1. All packets are assumed to have packet length L (bits), and the symbol period Ts is fixed. The proposed framework can be applied to any modulation and coding scheme. Our only assumptions are that the bit-error probability (BEP) at the output of the maximum-likelihood detector of the receiver and the transmission power P can be expressed as

BEP = BEP(h, P, z)   (1)
P = P(h, BEP, z)     (2)

where h is the channel state and z is the packet throughput in packets per time slot. Assuming independent bit errors, the packet loss rate (PLR) for a packet of size L can be computed from the BEP as PLR = 1 − (1 − BEP)^L.

Fig 1: Wireless Transmission System
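Equations (1) and (2) only state that the BEP and the transmit power mutually determine each other given the channel state and throughput. As a concrete, hypothetical instance (not the model used in the paper), the sketch below assumes uncoded BPSK over AWGN, where BEP = 0.5·erfc(√SNR), inverts that relation numerically to recover the required power, and computes the PLR; all names and parameter values are illustrative.

```python
import math

L = 128  # packet length in bits (illustrative value)

def bep_bpsk(p_tx, h, n0=1.0):
    """Instance of Eq. (1): BEP of BPSK over AWGN for transmit power p_tx
    and channel gain h, with noise power n0."""
    snr = p_tx * abs(h) ** 2 / n0
    return 0.5 * math.erfc(math.sqrt(snr))

def power_for_bep(bep_target, h, n0=1.0):
    """Instance of Eq. (2): transmit power needed to reach a target BEP,
    obtained by inverting Eq. (1) with bisection (the stdlib has no
    inverse erfc)."""
    lo, hi = 0.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if 0.5 * math.erfc(mid) > bep_target:
            lo = mid          # need a larger sqrt(SNR) to lower the BEP
        else:
            hi = mid
    return (lo * lo) * n0 / abs(h) ** 2   # SNR = erfc^{-1}(2*BEP)^2

def plr(bep, length=L):
    """PLR = 1 - (1 - BEP)^L, assuming independent bit errors."""
    return 1.0 - (1.0 - bep) ** length
```

Any other modulation scheme would only change `bep_bpsk` and its inverse; the framework itself, as the text notes, is agnostic to the choice.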

(i) Transmission Data Buffer: The transmission buffer is a first-in first-out (FIFO) queue. Each packet is of size L bits, and the arrival process is assumed to be independent and identically distributed. Arriving packets are stored in a finite-length buffer, which can hold a maximum of B packets. Whether a packet is transmitted without error depends on the throughput. If the buffer is stable, the holding cost is proportional to the queuing delay; if it is not, the overflow cost imposes a penalty for each dropped packet. Although PHY-centric solutions are effective at minimizing transmission power, they ignore the fact that it costs power to keep the wireless card on and ready to transmit; a significant amount of power can therefore be wasted even when no packets are being transmitted. System-level solutions address this problem.

(ii) Power Control: The set of channel states H is discrete and finite, and the channel state is constant for the duration of a time slot. The packet length L (bits) and the symbol period Ts are fixed. The packet throughput determines the number of bits per symbol, and together with the channel state h(n) it determines the transmission power p(tx); the bit-error probability is calculated as a function of the transmission power.

(iii) Power Management: Power consumption has become a major concern in the design of computing systems today. High power consumption increases cooling cost, degrades system reliability, and reduces battery life in portable devices. Modern computing/communication devices support multiple power modes, which enable a power/performance trade-off. Dynamic power management (DPM) has proven to be an effective technique for power reduction at the system level. Q-learning determines the active transmission for power management.
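A minimal sketch of the per-slot buffer dynamics described in (i), assuming a hypothetical per-packet overflow penalty `eta` and a holding cost equal to the backlog (proportional, by Little's law, to queuing delay when the buffer is stable); the function and parameter names are illustrative, not the paper's.

```python
from collections import deque

B = 25  # buffer capacity in packets (illustrative value)

def buffer_step(buffer, arrivals, throughput, eta=1.0):
    """One time-slot update of the finite FIFO transmission buffer.

    Returns (holding_cost, overflow_cost): the holding cost is the
    backlog left in the buffer; each dropped packet pays a penalty eta.
    """
    # transmit up to `throughput` head-of-line packets this slot
    for _ in range(min(throughput, len(buffer))):
        buffer.popleft()
    # admit new arrivals, dropping those that exceed capacity B
    dropped = 0
    for pkt in range(arrivals):
        if len(buffer) < B:
            buffer.append(pkt)
        else:
            dropped += 1
    return len(buffer), eta * dropped

q = deque()
hold, overflow = buffer_step(q, arrivals=5, throughput=0)
# 5 packets admitted into an empty buffer: backlog 5, nothing dropped
```

The scheduler's trade-off is visible here: a higher throughput lowers both costs but, via Eq. (2), raises the transmit power.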
The power state is managed in the set x = {on, off}, and the two power management actions form the set y = {s_on, s_off}. The wireless card consumes power in the on and off states, respectively, plus p(tx) watts while transmitting.

(iv) Stochastic Optimizer: Stochastic optimization (SO) methods are optimization methods that generate and use random variables. The injected randomness may enable the method to escape a local minimum and eventually approach a global optimum.

B. CONVENTIONAL Q-LEARNING ALGORITHM

Q-learning assumes that the unknown cost and transition probability functions depend on the action. Q-learning updates the action-value function for a single state-action pair in each time slot; it does not exploit known information about the system's dynamics. Using Q-learning, it is not obvious what the best action to take in each state is during the learning process. On the one hand, Q can be learned by randomly exploring the available actions in each state. Unfortunately, unguided randomized exploration cannot guarantee acceptable runtime performance because suboptimal actions will be taken frequently. On the other hand, taking greedy actions, which exploit the available information in Qn, can guarantee a certain level of performance, but exploiting what is already known about the system prevents the discovery of better actions. Many techniques are available in the literature to judiciously trade off exploration and exploitation. In this paper, we use the ε-greedy action selection method, but other techniques such as Boltzmann exploration can also be deployed. Importantly, the only reason exploration is required is that Q-learning assumes that the unknown cost and transition probability functions depend on the action.
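The per-slot update and the ε-greedy exploration/exploitation trade-off described above can be sketched as follows for a cost-minimization MDP (so the greedy action is the one with the lowest Q-value). The state/action names and parameter values below are illustrative assumptions, not the paper's.

```python
import random
from collections import defaultdict

def select_action(Q, state, actions, eps=0.1):
    """Epsilon-greedy: explore a uniformly random action with probability
    eps, otherwise exploit the greedy (minimum-cost) action known so far."""
    if random.random() < eps:
        return random.choice(actions)
    return min(actions, key=lambda a: Q[(state, a)])

def q_update(Q, s, a, cost, s_next, actions, alpha=0.1, gamma=0.95):
    """One Q-learning step for a single state-action pair:
    Q(s,a) <- (1-alpha)*Q(s,a) + alpha*(cost + gamma*min_a' Q(s',a'))."""
    best_next = min(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * (cost + gamma * best_next)

Q = defaultdict(float)            # zero initial conditions
actions = ["s_on", "s_off"]       # power-management actions, set y
# state = (power mode, buffer backlog); one slot's experience:
q_update(Q, ("on", 3), "s_on", cost=2.0, s_next=("on", 2), actions=actions)
```

Note that the update touches only the visited state-action pair, which is exactly why unguided exploration converges slowly on large state spaces.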
In the remainder of this section, we describe circumstances under which exploiting partial information about the system can obviate the need for action exploration. Q-learning is a model-free reinforcement learning technique. Specifically, Q-learning can be used to find an optimal action-selection policy for any given (finite) Markov decision process (MDP). It works by learning an action-value function that ultimately gives the expected utility of taking a given action in a given state and following the optimal policy

thereafter. When such an action-value function has been learned, the optimal policy can be constructed by simply selecting the action with the highest value in each state. One of the strengths of Q-learning is that it can compare the expected utilities of the available actions without requiring a model of the environment. Additionally, Q-learning can handle problems with stochastic transitions and rewards without requiring any adaptations. It has been proven that, for any finite MDP, Q-learning eventually finds an optimal policy. The update performed in each time slot is

Q(s, a) ← (1 − α) Q(s, a) + α (c + γ min_a' Q(s', a'))   (3)

where s and a are the state and action performed in the time slot, respectively, c is the corresponding cost, α is the learning rate, γ is the discount factor, and the minimizing a' is the greedy action in the next state s'.

Fig 2: State Diagram

Since Q-learning is an iterative algorithm, it implicitly assumes an initial condition before the first update occurs. A high (infinite) initial value, also known as "optimistic initial conditions," can encourage exploration: no matter which action is taken, the update rule will give it a lower value than the alternatives, thus increasing their choice probability. Recently, it was suggested that the first reward could be used to reset the initial conditions. According to this idea, the first time an action is taken, its reward is used to set the value of that state-action pair. This allows immediate learning in the case of fixed deterministic rewards. Surprisingly, this resetting-of-initial-conditions (RIC) approach appears to be consistent with human behaviour in repeated binary choice experiments.

C. PROPOSED METHOD

The proposed algorithm is based on an energy-efficient opportunistic transmission scheduler considering two different approaches: 1) the minimization of the expected energy consumption (E2OTS I), and 2) the minimization of the average energy consumption per unit of time (E2OTS II). The proposed method minimizes the energy consumption by using a transmission scheduling algorithm.
This type of algorithm schedules the amount of data for the desired users with a view to minimizing the energy consumption of a wireless device and prolonging its battery lifetime. The lifetime of the batteries in wireless networks depends on the energy consumption of the devices. This energy consumption

is effectively minimized using multilevel queue scheduling. The energy consumption depends on the channel state because the channels are time variant. The goal is to achieve higher energy utilization in wireless communications by exploiting good channel conditions. To accomplish this, we use distributed opportunistic transmission scheduling to prolong the battery lifetime of a wireless device. According to the proposed scheduling techniques, we postpone communication until we find the best expected channel conditions for transmission, also taking into account a given tolerable time deadline and a required power level at the receiver. Compared to the conventional algorithm, power consumption is minimized by the scheduling algorithm.

D. RESULT AND IMPLEMENTATION

Compared to the conventional Q-learning algorithm, power consumption is minimized by the energy-efficient transmission scheduling algorithm: the power minimization with the conventional Q-learning algorithm is 33%, whereas with the transmission scheduling algorithm it is 41%.

Fig 3: Cumulative average power versus time slot (output for the conventional Q-learning algorithm)

Fig 4: Impact of the CSI acquisition's energy consumption on the total average power consumption (output for the transmission scheduling algorithm)
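The postpone-until-good-channel rule of the proposed scheduler can be sketched as below, under the simplifying assumptions that the required power level at the receiver `p_req` is fixed and the transmit energy scales as `p_req` divided by the channel gain; the threshold rule, deadline handling, and all names are illustrative, not the paper's exact E2OTS formulation.

```python
def opportunistic_schedule(channel_gains, deadline, threshold, p_req=1.0):
    """Postpone transmission until the channel gain exceeds `threshold`,
    or until the tolerable deadline forces transmission.

    Returns (slot, energy): transmitting on a stronger channel (larger
    gain) requires less energy to meet the receive-power requirement.
    """
    for slot, gain in enumerate(channel_gains):
        last_chance = slot == deadline
        if gain >= threshold or last_chance:
            return slot, p_req / gain
    raise ValueError("deadline exceeds the observed channel trace")

# wait through two weak slots, then transmit in slot 2 where the gain is high
slot, energy = opportunistic_schedule([0.2, 0.3, 1.5, 0.4],
                                      deadline=3, threshold=1.0)
# slot == 2, energy == 1.0 / 1.5
```

In the E2OTS variants, the fixed threshold would instead come from minimizing the expected energy (E2OTS I) or the average energy per unit of time (E2OTS II) over the channel statistics.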

Fig 5: Impact of the CSI acquisition's energy consumption on the average performance

II. CONCLUSION

Data transmission in point-to-point wireless communication degrades energy efficiency, and one of the primary solutions to this problem is to minimize power consumption under delay constraints. Through MATLAB simulation, we have shown that the transmission scheduling algorithm achieves lower power consumption than the conventional Q-learning algorithm.