A Closer Look at Integration Testing

https://www.qualitylogic.com/2018/01/11/closer-look-integration-testing/

Integration testing is the process of assembling various code modules, each of which performs a very specific task, into a software system that uses them in a more generalized application. The article gives the example of a module that stores and retrieves files, another that displays their names in a list, another that reads keystrokes, and one that recognizes mouse clicks; together they form a file management application. Each module is usually created and tested separately with unit tests before being added to the code control system. Once in that system, the modules are subjected to integration tests, which exercise them in conjunction with one another. This usually uncovers issues with data formats, operation timing, API calls, database access, and user interface operation.
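
As a rough illustration of that workflow, here is a minimal Python/pytest sketch; the save_file and list_filenames functions are hypothetical stand-ins for two such modules, not code from the article:

```python
import os

# Minimal stand-ins for two modules of a hypothetical file manager.
def save_file(directory, name, data):
    """Storage module: writes bytes to a file."""
    with open(os.path.join(directory, name), "wb") as f:
        f.write(data)

def list_filenames(directory):
    """Display module: returns the names the UI would show."""
    return sorted(os.listdir(directory))

def test_save_file_unit(tmp_path):
    """Unit test: exercises the storage module in isolation."""
    save_file(tmp_path, "notes.txt", b"hello")
    assert (tmp_path / "notes.txt").read_bytes() == b"hello"

def test_store_and_list_integration(tmp_path):
    """Integration test: the storage and listing modules must agree
    on where files live and how names are reported."""
    save_file(tmp_path, "notes.txt", b"hello")
    assert "notes.txt" in list_filenames(tmp_path)
```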

There are four parts of integration testing that are critical. The first is software integration, which means in part that everything functions the way the user commands. For example, with interface operations it is important that when a user does something unexpected by the system's design, the application does not simply crash but produces an error message. Next, the modules must continually exchange data as the overall system runs. This exchange is often carried by a data or command bus that facilitates module communication. These buses are often the most stressed components of the system, and integration tests should exercise them under various amounts of load to ensure that the individual modules work together properly. Third are API calls, which are the most common channel by which the system communicates with third-party services and have also become a tool for connecting modules within the system. They allow data and commands to be separated into distinct function calls, making the exchange easier to debug. One of the main demands API calls place on integration testing is that each one must be tested separately, across the entire set of data and both inside and outside the acceptable range; then the calls must be tested in functional groups. Finally, there is end-to-end testing, which is a big priority for integration testing. The test team sets up a system called a sandbox environment and then creates tests for every use the application would go through in real-world operation. Simulated user inputs drive the front end with data and requests, which are passed to the middleware and on to the back end's API calls to third parties and database manipulations. All of this is carefully monitored for accuracy and run time.
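
To make the API-testing point concrete, here is a minimal pytest sketch, with a local set_volume function standing in for one hypothetical API call; a real end-to-end suite would run against the sandbox deployment instead:

```python
import pytest

# Stand-in for one hypothetical API call; a real end-to-end suite
# would exercise the sandbox environment instead.
def set_volume(level: int) -> int:
    if not 0 <= level <= 100:
        raise ValueError("level out of range")
    return level

# The call is exercised both inside and outside its acceptable range.
@pytest.mark.parametrize("level", [0, 50, 100])
def test_set_volume_accepts_valid_levels(level):
    assert set_volume(level) == level

@pytest.mark.parametrize("level", [-1, 101, 10_000])
def test_set_volume_rejects_invalid_levels(level):
    # Bad input should produce a clean error, never a crash.
    with pytest.raises(ValueError):
        set_volume(level)
```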



Defending against Spectre and Meltdown attacks

http://news.mit.edu/2018/mit-csail-dawg-better-security-against-spectre-meltdown-attacks-1018

In January, the security vulnerabilities Meltdown and Spectre were disclosed. These vulnerabilities were born not from the usual sources, software bugs or physical CPU defects, but from the architecture of the CPU itself. This means that huge numbers of people and businesses were vulnerable. With this new method of defense it is much harder for attackers to get away with such attacks. It may also have an immediate impact on fields like medicine and finance, which limit their use of cloud computing due to security concerns. With Meltdown and Spectre, the attackers took advantage of the fact that operations can take different amounts of time to compute. For example, someone trying to brute-force a password can look at how long a wrong password takes to be rejected and compare it to another entry; if the second takes longer, something in that entry has a correct number or letter. The usual defense against this class of attack is Cache Allocation Technology (CAT), which partitions the cache so that data is not stored all in one area. Unfortunately this method is still quite insecure, because data remains visible to all partitions. The new approach is a form of secure way partitioning called Dynamically Allocated Way Guard (DAWG). Because it is dynamic, it can split the cache and then change the size of those pieces over time. DAWG is able to fully isolate one program from another through the cache while still offering performance comparable to CAT. It establishes clear boundaries between programs so that sharing does not happen where it should not, which is helpful for programs handling sensitive information.
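
The timing side channel behind this kind of attack can be sketched at the software level in Python. Meltdown and Spectre measure cache timing in hardware, but the principle of leaking secrets through execution time is the same; the secret and checks below are purely illustrative:

```python
import hmac
import timeit

SECRET = b"hunter2-correct-horse"  # illustrative secret

def naive_check(guess: bytes) -> bool:
    # Early-exit comparison: rejection time grows with the length of
    # the matching prefix, which is what a timing attack measures.
    if len(guess) != len(SECRET):
        return False
    for a, b in zip(guess, SECRET):
        if a != b:
            return False
    return True

def constant_time_check(guess: bytes) -> bool:
    # compare_digest takes the same time no matter where bytes differ.
    return hmac.compare_digest(guess, SECRET)

# A guess sharing a 10-byte prefix with the secret is rejected more
# slowly by naive_check than an equally long, completely wrong guess.
wrong = b"x" * len(SECRET)
prefix = SECRET[:10] + b"x" * (len(SECRET) - 10)
for guess in (wrong, prefix):
    t = timeit.timeit(lambda: naive_check(guess), number=200_000)
    print(guess, round(t, 4), "seconds")
```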

The article mentions that these microarchitectural attacks are becoming more common because other methods of attack have become more difficult. I thought that was interesting because it is a relatively new attack surface that has not had time to receive much defensive development. This is an issue that can affect anyone, and it is a serious problem. On top of that, performance is a big concern with this kind of security, since it deals directly with the CPU and its architecture, which is not an easy thing to fix. The article also points out that because of these attacks, more information sharing between applications is not always a good thing. I find that pretty interesting, since a large number of applications made by the same company now have information sharing capabilities, such as the Microsoft umbrella of software. Sharing information between programs can actually put you at more risk than the time saved by sharing is worth.

Detecting fake news at its source

http://news.mit.edu/2018/mit-csail-machine-learning-system-detects-fake-news-from-source-1004

This is an interesting article about how researchers are trying to build a program that decides whether or not a news source is reliable. The program uses machine learning and scrapes data about a site to make its determination, and it needs only about 150 articles to reliably detect whether a source is accurate. The researchers first took data from mediabiasfactcheck.com, a site where human fact checkers analyze the accuracy of over 2,000 news sites. They then fed that data into their algorithm to teach it to classify news sites into high, medium, or low levels of factuality. As of now, the system is 65 percent accurate at detecting these levels of factuality and 70 percent accurate at deciding whether a source is left-leaning, right-leaning, or moderate. The researchers determined that the best way to detect fake news is to look at the language used in a source's stories: fake news sources are more likely to use language that is hyperbolic, subjective, and emotional. The system was also able to read the Wikipedia pages of fake news sources and noticed that those pages contained an abnormal number of words like "extreme" or "conspiracy theory." It even found correlations with the structure of a source's URL: sources with lots of special characters and complicated subdirectories were associated with being less reliable.
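
This is not the researchers' actual system, but the general shape of source-level classification can be sketched with an off-the-shelf text classifier; the toy texts and labels below are invented for illustration:

```python
# Toy sketch of classifying a source's factuality from the language
# in its articles; texts and labels are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "officials confirmed the measured figures in a public report",
    "analysts offered mixed estimates pending further review",
    "the shocking truth they are hiding will leave you outraged",
]
labels = ["high", "medium", "low"]  # factuality of each source

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(texts, labels)

# Hyperbolic, emotional wording pushes a source toward "low".
print(model.predict(["this outrageous conspiracy will shock you"]))
```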

I think this is noteworthy because of how important accurate information is on the internet. A large number of people spread misinformation on social media and influence the behaviors and thoughts of their readers. There are constantly news stories about bots from Russia or other countries trying to spread misinformation, not only to the United States but to anyone they consider a threat. The spread of this fake news causes a lot of problems and poses a serious issue for the information age, to the point that I have heard people refer to our current time as the "misinformation age." If automated software could detect fake news with near-perfect accuracy, it would help the entire planet combat such a large and seemingly unbeatable problem. Something that takes hundreds of fact checkers could take a program a few minutes, and it could be accessible to the general public for all of their needs. Although the algorithm in its current form is not accurate enough, it is a step forward and a look at a possible solution that we desperately need.

Model helps robots navigate more like humans do

http://news.mit.edu/2018/model-helps-robots-navigate-like-humans-1004

A paper describes a model in which researchers combined a planning algorithm with a neural network that learns to recognize paths leading to the best outcome, then uses that knowledge to guide a robot through an environment. The researchers demonstrate their model in two settings: navigating rooms that have traps and narrow passageways, and navigating a room without any collisions. The model learns by being shown a few examples of similar environments and then bases its actions on them. For example, if it recognizes a door, it will know to exit through it, because the learned examples all exited through a door. The model combines older, more common methods with this newer take on machine learning: the planner builds a search tree while the neural network mirrors each step and predicts probabilities for the possible actions. If the network has high confidence of success, it acts on its prediction; otherwise it falls back on exploring the environment like the traditional method. One application for this model is autonomous cars, where multiple agents all operate at the same time in the same space. For autonomous cars, intersections, and especially roundabouts, are extremely challenging, since numerous cars move around a circle, entering and exiting all at once. Results indicate that this model can learn enough behavior from previous experience to navigate something that challenging. In addition, it needed only a few examples with very few cars in a roundabout to be successful.
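
A minimal sketch of that confidence-based fallback, with stand-in functions for both the network and the planner (the threshold value is an assumption, not from the paper):

```python
import random

CONFIDENCE_THRESHOLD = 0.8  # assumed value, not from the paper

def network_policy(state):
    """Stand-in for the neural network: returns (action, confidence).
    A real model would score actions learned from example paths."""
    return "move_forward", random.random()

def tree_search(state):
    """Stand-in for the traditional planner's exploratory search."""
    return random.choice(["move_forward", "turn_left", "turn_right"])

def next_action(state):
    action, confidence = network_policy(state)
    if confidence >= CONFIDENCE_THRESHOLD:
        return action            # trust the learned prediction
    return tree_search(state)    # fall back to classic exploration

for step in range(5):
    print(step, next_action({"position": step}))
```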

I find this article interesting since autonomous cars are going to be extremely popular in the future. This model is another advancement toward making self-driving cars even more independent, to the point of something from a sci-fi movie. In addition, something that can efficiently navigate a room is important for other applications, such as assisting someone who is blind. Technology like this could give someone who has to rely on others more independence, if all they need to do is wear glasses that tell them where objects in the room are located and how to navigate around them.

Machine-learning system tackles speech and object recognition, all at once

http://news.mit.edu/machine-learning-image-object-recognition-0918

Speech recognition systems usually require hundreds of thousands of transcripts in order to work properly. This new machine learning model instead learns through audio-visual associations, similar to how a child learns by correlating speech with related images. The researchers then modified the model to associate specific words with specific pixels. It works by dividing the image into a grid of cells consisting of patches of pixels while dividing the audio into segments of its spectrogram. It then compares each image cell to each audio segment and produces a similarity score for every pair. The researchers call this comparison method a "matchmap." One promising use is learning translations among the languages of the planet: of an estimated 7,000 languages spoken worldwide, only about 100 have transcription data for speech recognition. With this model, speakers of two different languages can describe the same image, and the machine can learn the speech signals of both languages and match the words, making them translations of one another. This is interesting because it means the model does not require written text to learn to translate. For languages that are rarely written down, the machine can translate meanings where today's common methods cannot.
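
A minimal sketch of computing such a matchmap, assuming the image cells and audio segments have already been embedded into a shared vector space (the shapes and sizes are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed shapes: a 14x14 grid of image cells and 128 spectrogram
# segments, each embedded into a shared 512-dimensional space.
cells = rng.normal(size=(14 * 14, 512))
audio = rng.normal(size=(128, 512))

# Matchmap: a dot-product similarity for every (cell, segment) pair.
matchmap = cells @ audio.T            # shape (196, 128)

# The strongest pairing hints at which pixels a spoken word refers to.
best_cell, best_segment = np.unravel_index(matchmap.argmax(), matchmap.shape)
print(matchmap.shape, best_cell, best_segment)
```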

This method is important to note because machine learning is a growing topic in computer science, and this approach could open up all kinds of possibilities. It is a new and innovative way to attack the problem and might become something needed for future jobs or projects. With the matchmap system, speech recognition no longer needs to be manually taught hundreds of thousands of transcriptions, along with examples of those transcriptions, in order to function properly. This matters more and more as new words enter our vocabulary and become common over time. Currently the machine can only recognize a few hundred words, but in the future it could help advance the machine learning field while also improving existing speech recognition software such as Siri.