Saturday, May 2, 2020

Hypothesis Testing and Changepoint Detection - MyAssignmenthelp.com

Question: Discuss hypothesis testing and changepoint detection.

Answer:

Introduction

Cryptography is a process by which information can be converted into a format that cannot be read normally. The purpose of cryptography is to conceal a secret message from unwanted viewers; only the intended recipient is provided with the means to convert it back to readable text (Stallings & Tahiliani, 2014). Data that can be read and understood without any special technique is known as plaintext. The method of masking plaintext in order to hide its contents is called encryption, and encrypting plaintext produces an unreadable format known as ciphertext.

Symmetric key encryption is a cryptographic method in which a shared secret key is used both to encrypt and to decrypt the data (Agrawal & Mishra, 2012). Symmetric encryption algorithms process large amounts of information efficiently and are computationally less intensive than asymmetric encryption algorithms. Stream ciphers and block ciphers are the two types of symmetric key algorithm, providing bit-by-bit and block-by-block encryption respectively. Data Encryption Standard (DES), Triple Data Encryption Standard (3DES), Advanced Encryption Standard (AES) and Blowfish are examples of symmetric key algorithms (Surya & Diviya, 2012).

Public key encryption, also known as asymmetric encryption, is a type of cryptography that employs two keys for encrypting and decrypting data. One key is used to encrypt the data but cannot be used to decrypt it; the other key is used to decrypt the data but cannot be used to encrypt it (Wee, 2012). The two keys are the public key and the private key. The public key is used for encryption and can be shared openly, while the private key is held only by the receiver of the message. To decrypt an encrypted message, the recipient uses the private key. The public key is open to all, but on its own it cannot be used to decode the ciphertext (Hofheinz & Jager, 2012); therefore the encryption remains secure. Both keys are derived from very large prime numbers. Since there are infinitely many primes, the number of possible key pairs is effectively unlimited, which greatly strengthens the security of the system.

An example illustrates the point. Suppose person A sends encrypted data to person B. A encrypts the data with B's public key and sends it to B (Hsu, Yang & Hwang, 2013). B already holds the matching private key and, after receiving the package, decrypts the data with it. Any intermediate viewer can see the package but is unable to decode the data, as they do not possess the private key (Jeeva, Palanisamy & Kanagaram, 2012).

The term hashing generally means reducing something from its original size. In computing, hashing is the conversion of a string of characters into a smaller, fixed-length value or key that represents the original string. The purpose of hashing is to summarise and retrieve items in a database, as it is quicker to locate an object using the short key than to find it using the original value.
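The following Python sketch illustrates the public-key workflow and the hashing idea described above. It assumes the third-party pyca/cryptography package is installed; the message text, key size and hash input are arbitrary examples rather than anything prescribed by the sources cited here.

```python
import hashlib

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Generate an RSA key pair: the private key stays with the recipient (B),
# while the public key can be shared openly with any sender (A).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# OAEP padding is the standard choice for RSA encryption with this library.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

message = b"confidential report"                    # example plaintext
ciphertext = public_key.encrypt(message, oaep)      # anyone holding the public key can encrypt
recovered = private_key.decrypt(ciphertext, oaep)   # only the private key can decrypt
assert recovered == message

# Hashing: input of any length is reduced to a fixed-length digest.
digest = hashlib.sha256(b"a much longer string that we want to index quickly").hexdigest()
print(digest, len(digest))                          # SHA-256 always yields 64 hex characters
```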
Hashing also finds its use in many encryption algorithms (Park et al., 2012). All the techniques mentioned above are used for maintaining confidentiality and authentication. However, the public key encryption method is best suited to maintaining confidentiality and authentication, because two keys derived from very large prime numbers are involved in encryption and decryption. This increases the complexity of the encryption and thereby enhances the security of confidential data.

Denial-of-Service (DoS) is a form of cyberattack carried out against a personal or organisational network. The purpose of this type of attack is generally not to expose or retrieve confidential information from the network; rather, it is intended to cause serious disruption for the users of the network. A DoS attack floods a network with requests, increasing its traffic. This results in slow network connections and websites failing to load properly (Gunasekhar et al., 2014). In an organisation, a DoS attack can clog the network, leaving employees unable to access the web services needed for organisational operations. DoS is effective at taking an organisation's services offline, which can cause loss of business and negative publicity.

A DoS attack is almost impossible to prevent entirely, especially an advanced, botnet-driven DDoS (Distributed Denial of Service) attack. It is very difficult to distinguish a malicious request from a legitimate one, as both often use the same protocols and ports and may resemble each other in content. However, some precautions can be taken against DoS attacks in an organisation (Durcekova, Schwartz & Shahmehri, 2012). The organisation can purchase a large amount of bandwidth. This is expensive but is also the simplest approach: the more bandwidth a network has, the more traffic an attacker must generate to clog it with requests (Malekzadeh, Ghani & Subramaniam, 2012).

Another precaution is to use DoS attack identification and detection techniques such as wavelet-based signal analysis, activity profiling and change-point detection, which help to distinguish malicious traffic from legitimate traffic. In such a process the first task is to determine the exact moment of the attack. This is possible using activity profiling, which calculates the average rate of traffic and marks significant increases in that rate. An organisation that can detect a DoS attack can also determine the type of attack being carried out against it (Fachkha et al., 2012). Another robust mechanism for detecting a DoS attack is the Change-Point Detection or Change-Point Monitoring (CPM) system. A CPM uses inbuilt protocol behaviours to detect a DoS attack. CPM does not depend on traffic flow rates or specific applications, because the protocol behaviours are determined exclusively by the protocol specifications and the service models of Internet applications (Tartakovsky, Polunchenko & Sokolov, 2013).
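The activity-profiling and change-point ideas can be sketched in a few lines of Python. The example below is a toy illustration rather than the CPM system described in the cited literature: it estimates a baseline request rate from an initial window and then applies a one-sided cumulative-sum (CUSUM) test to flag the moment a flood of requests begins. The window length, drift and threshold are arbitrary illustrative values, and NumPy is assumed to be available.

```python
import numpy as np

def cusum_alarm(rates, baseline_window=20, drift=2.0, threshold=50.0):
    """Return the first index at which the one-sided CUSUM statistic exceeds the threshold."""
    baseline = float(np.mean(rates[:baseline_window]))  # average rate during "normal" traffic
    s = 0.0
    for i, r in enumerate(rates):
        s = max(0.0, s + (r - baseline - drift))  # accumulate only upward deviations
        if s > threshold:
            return i                              # change point: traffic rate has shifted upward
    return None

# Toy per-second request counts: normal traffic, then a flood starting at t = 60.
rng = np.random.default_rng(0)
rates = np.concatenate([rng.poisson(10, size=60), rng.poisson(40, size=30)])
print(cusum_alarm(rates))                         # alarms shortly after the flood begins
```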
Because CPM relies on the non-parametric CUSUM method, it is not sensitive to traffic patterns or to particular sites; this makes CPM robust, much more generally applicable and easier to deploy. CPM plays a dual role in the detection of DoS attacks, as first-mile CPM and last-mile CPM. Because first-mile CPM is close to the sources of flooding, it raises the alarm about an ongoing DoS attack and helps to reveal the source from which the flooding originates. The simplicity of CPM is due to its low computational overhead and statelessness (Tartakovsky, Nikiforov & Basseville, 2014). The last and most obvious precaution against DoS attacks is to implement a strong, sophisticated firewall in the network, along with other network security software that can detect or warn of a DoS attack before it affects the organisation (Raiyn, 2014).

Certain rules must be followed while working in secure areas, as the protection and privacy of such areas should be the utmost priority. The following rules are necessary:

- The organisation shall manage the activities of every individual who enters an important area such as computer rooms, control centres, data storage rooms and important server locations (Peltier, 2016).
- The organisation must prohibit the use of any media with an external interface connection within a secret area without prior approval from the accreditation authority (Peltier, 2016).
- The organisation should prohibit the use of any media with an external interface connection in a top-secret area without written authorisation from the accreditation authority (Peltier, 2016).
- Areas containing especially sensitive sectors should be designated with proper symbols and have tight security deployed at all entry and exit points leading to the secure area (Peltier, 2016).
- The behaviour of personnel on the premises of restricted areas should be governed by formal policies or guidelines, including prohibitions on eating, drinking and smoking; rules on the use of radio-frequency devices such as mobile phones near sensitive equipment; and rules on when storage devices may be used (Peltier, 2016).
- Methodologies for working in secure areas shall be properly designed and applied (Peltier, 2016).
- Secure working areas must have physical protection. Possible measures include informing personnel of the operations inside a secure facility only on a need-to-know basis, supervising operations in secure sectors, and locking and periodically checking vacant secure areas (Peltier, 2016).
- Only personnel with legitimate authorisation should be permitted access to places that store sensitive information, such as a computer room or a data storage room (Peltier, 2016).
- Computerised operations should be kept by the organisation in a secure area with restricted access. Unauthorised personnel must be denied access through the use of restricted areas, security rooms and locked doors (Peltier, 2016).
- Restricted areas may be created for the protection of sensitive and confidential information in open areas during working shifts. Restricted areas must be clearly marked, and any unauthorised access must be promptly challenged by the personnel working there (Peltier, 2016).
- The organisation must limit controlled-area access to authorised personnel only during criminal justice information processing times (Peltier, 2016).
- The organisation's personnel policies should be obtained and reviewed to assess the methods and controls applied when recruiting new employees (Peltier, 2016).
- A procedure must be implemented that specifies who can authorise personnel to work in locations where ePHI might be accessed (Peltier, 2016).

The trash bins used in an organisation can become a valuable source of information for industrial spies seeking company secrets. Industrial competitors are always on the lookout for sensitive information about their rivals, which may fall into their lap in the form of a crumpled paper thrown carelessly into a trash bin by an employee of the rival company and later retrieved by an industrial spy disguised as a sweeper (Benny, 2013). There are many instances in which an employee threw a paper containing sensitive company data carelessly into an office wastebasket and caused heavy losses to the company. Given current security conditions, every organisation spends a fortune tightening its security measures, yet all of that effort can be undone by such careless acts. There are, however, procedures that can prevent one company's trash bin from becoming the winning trophy of its rival (Bhatti & Alymenko, 2017).

Paper shredders are machines that can completely destroy company documents that are no longer needed, instead of those documents being discarded in the bin. This is an effective way to prevent a company from losing any of its documents to the outside world, even documents it no longer requires. Some companies, such as Data Destruction Services Inc. in Boston, run by Dick Hannon, take orders from organisations that wish to destroy documents and deploy vans equipped with shredding machines at the customer's location to destroy them on site (Bhatti & Alymenko, 2017).

To reduce the risk of desktop PC theft, individual desktop PCs in an organisation can be securely locked to their desks with a cable, provided there is something on the desk to wrap the cable around. In addition, each PC should have a login screen requiring a complex passcode and a screensaver, so that an intruder is restricted from unauthorised access to and use of the system (Bhatti & Alymenko, 2017).

The intrusion detection system of a firewall uses two types of filtering, namely deep packet inspection and packet stream analysis. Deep packet inspection, abbreviated DPI, is a method of packet filtering that operates at the application layer of the Open Systems Interconnection (OSI) reference model. DPI makes it possible to locate, recognise, differentiate, reroute or block packets containing particular code or data that typical packet-filtering tools fail to identify, because those tools examine only packet headers. Communication service providers use DPI to allocate their available resources and achieve a streamlined flow of traffic (Antonello et al., 2012). For example, a message tagged as high priority can be routed to its destination ahead of pending lower-priority data packets belonging to ordinary internet browsing.
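To illustrate the difference between header-only filtering and payload inspection, the sketch below uses the third-party Scapy library (an assumption; the signature byte strings and packet count are also purely illustrative, and live capture typically requires administrator or root privileges). It captures TCP packets and searches their payloads for byte patterns that a filter looking only at packet headers would never see.

```python
from scapy.all import Raw, TCP, sniff

# Example byte patterns to look for inside payloads; real DPI engines use large signature sets.
SIGNATURES = [b"cmd.exe", b"/etc/passwd"]

def inspect(pkt):
    # A header-only filter sees just addresses and ports; here we also read the payload bytes.
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        for sig in SIGNATURES:
            if sig in payload:
                print(f"suspicious payload on port {pkt[TCP].dport}: {sig!r}")

# Capture 100 TCP packets from the default interface and inspect each one.
sniff(filter="tcp", prn=inspect, count=100, store=False)
```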
DPI can also be used to speed up data transfer, curb P2P abuse and improve network performance, enhancing the experience of most subscribers. Using DPI has security implications as well, since the technology makes it possible to identify the original source and the recipient of content carried in specific packets (Antonello et al., 2012).

A packet stream is essentially a message or piece of data broken down into small fragments, each of which carries part of the information contained in the whole. The message is split in such a way that, at the receiving end, it can be reassembled into its original form without damaging the information it contains (Sanders, 2017). The process of analysing a packet stream within a network for possible malicious content, or for the detection of intrusions, is called packet stream analysis; it is a basic method of monitoring a network. Many network packet analyser tools monitor all incoming and outgoing packets on a network link and capture them for inspection for malicious traffic. These tools can also reconstruct sessions, which helps to understand the exact problem in the network. Apart from the packet headers, most network data looks like binary noise, and a large amount of processing power is required to extract truly meaningful information from a network session; the effort grows dramatically if the data is encrypted. Deeper analysis requires additional hardware, which in turn means higher cost. That is why thorough network analysis is processing-intensive and demanding for the analyst (Sanders, 2017).
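The fragmentation and reassembly idea can be shown with a short Python sketch. This is a simplified illustration, not a real transport protocol: the message, the chunk size and the sequence-number scheme are arbitrary choices, and real packet streams carry far more header information.

```python
import random

def fragment(message: bytes, size: int = 8):
    """Split a message into (sequence number, chunk) pairs, like a message carried in many packets."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Rebuild the original message even when the chunks arrive out of order."""
    return b"".join(chunk for _, chunk in sorted(packets))

original = b"a packet stream is just a message cut into ordered pieces"
packets = fragment(original)
random.shuffle(packets)                 # simulate out-of-order delivery on the network
assert reassemble(packets) == original  # the receiver restores the original message
```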
References

Agrawal, M., & Mishra, P. (2012). A comparative survey on symmetric key encryption techniques. International Journal on Computer Science and Engineering, 4(5), 877.

Antonello, R., Fernandes, S., Kamienski, C., Sadok, D., Kelner, J., Gódor, I., ... & Westholm, T. (2012). Deep packet inspection tools and techniques in commodity platforms: Challenges and trends. Journal of Network and Computer Applications, 35(6), 1863-1878.

Benny, D. J. (2013). Industrial espionage: Developing a counterespionage program. CRC Press.

Bhatti, H. J., & Alymenko, A. (2017). A Literature Review: Industrial Espionage.

Durcekova, V., Schwartz, L., & Shahmehri, N. (2012, May). Sophisticated denial of service attacks aimed at application layer. In ELEKTRO, 2012 (pp. 55-60). IEEE.

Fachkha, C., Bou-Harb, E., Boukhtouta, A., Dinh, S., Iqbal, F., & Debbabi, M. (2012, October). Investigating the dark cyberspace: Profiling, threat-based analysis and correlation. In Risk and Security of Internet and Systems (CRiSIS), 2012 7th International Conference on (pp. 1-8). IEEE.

Gunasekhar, T., Rao, K. T., Saikiran, P., & Lakshmi, P. S. (2014). A survey on denial of service attacks.

Hofheinz, D., & Jager, T. (2012, August). Tightly Secure Signatures and Public-Key Encryption. In Crypto (Vol. 7417, pp. 590-607).

Hsu, S. T., Yang, C. C., & Hwang, M. S. (2013). A Study of Public Key Encryption with Keyword Search. IJ Network Security, 15(2), 71-79.

Jeeva, A. L., Palanisamy, D. V., & Kanagaram, K. (2012). Comparative analysis of performance efficiency and security measures of some encryption algorithms. International Journal of Engineering Research and Applications (IJERA), 2(3), 3033-3037.

Malekzadeh, M., Ghani, A. A. A., & Subramaniam, S. (2012). A new security model to prevent denial-of-service attacks and violation of availability in wireless networks. International Journal of Communication Systems, 25(7), 903-925.

Park, P., Yoo, S., Choi, S. I., Park, J., Ryu, H. Y., & Ryou, J. (2012). A Pseudo State-Based Distributed DoS Detection Mechanism Using Dynamic Hashing. In Computer Applications for Security, Control and System Engineering (pp. 22-29). Springer, Berlin, Heidelberg.

Peltier, T. R. (2016). Information Security Policies, Procedures, and Standards: Guidelines for effective information security management. CRC Press.

Raiyn, J. (2014). A survey of cyber attack detection strategies. International Journal of Security and Its Applications, 8(1), 247-256.

Sanders, C. (2017). Practical packet analysis: Using Wireshark to solve real-world network problems. No Starch Press.

Stallings, W., & Tahiliani, M. P. (2014). Cryptography and network security: Principles and practice (Vol. 6). London: Pearson.

Surya, E., & Diviya, C. (2012). A Survey on Symmetric Key Encryption Algorithms. International Journal of Computer Science & Communication Networks, 2(4), 475-477.

Tartakovsky, A. G., Polunchenko, A. S., & Sokolov, G. (2013). Efficient computer network anomaly detection by changepoint detection methods. IEEE Journal of Selected Topics in Signal Processing, 7(1), 4-11.

Tartakovsky, A., Nikiforov, I., & Basseville, M. (2014). Sequential analysis: Hypothesis testing and changepoint detection. CRC Press.

Wee, H. (2012, May). Public Key Encryption against Related Key Attacks. In Public Key Cryptography (Vol. 7293, pp. 262-279).
