Monday, 19 May 2014

Agile Testing

The basic principles underlying agile testing are communication, simplicity, feedback, and iteration. It aims to satisfy customer requirements in a timely manner, and the customer is considered part of the project. There should be a close working relationship between developers and testers, who help each other find solutions quickly. In this approach a simple function is implemented first, and extra functionality is added afterwards. The agile approach uses customer feedback at every step. To perform agile testing, an agile approach to development is mandatory: the software is divided into small modules, which are then prioritized according to user requirements. The number of iterations should be kept small.

Advantages and Disadvantages of White Box Testing

Advantages of white box testing

·         Forces the test developer to reason carefully about the implementation.
·         Approximates the partitioning done by execution equivalence.
·         Reveals errors in "hidden" code, such as beneficent side-effects and internal optimizations (e.g., a table that changes its representation when its size exceeds 100); a sketch illustrating this follows the list.
·         As knowledge of the internal code structure is a prerequisite, it becomes easy to determine which types of input will exercise the application effectively.
·         White box testing also helps in optimizing the code.
·         It helps in removing extra lines of code, which can harbor hidden defects.
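
To make this concrete, here is a minimal white box test sketch in Python (written for pytest; the function and its size threshold are hypothetical, echoing the "changes representation when size exceeds 100" example above). The second test exists only because reading the implementation reveals the hidden branch.

def store(items):
    # Hidden optimization: large inputs switch to a sorted representation.
    if len(items) > 100:
        return sorted(items)
    return list(items)

# White box tests: one case per branch, chosen by reading the code.
def test_small_input_keeps_order():
    assert store([3, 1, 2]) == [3, 1, 2]

def test_large_input_switches_representation():
    data = list(range(101, 0, -1))   # 101 items crosses the hidden threshold
    assert store(data) == sorted(data)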

Disadvantages of white box testing

·        Expensive.
·         Misses cases omitted from the code: white box tests cannot reveal functionality that was never implemented.
·         As knowledge of the code and internal structure is a prerequisite, a skilled tester is needed to carry out this type of testing, which increases the cost.
·         It is nearly impossible to look into every bit of code to find hidden errors, so some defects may remain and later cause the application to fail.
·         It does not look at the code in a runtime environment. That matters because exploitability of a vulnerability depends on every aspect of the target platform, and source code is just one of those components. The underlying operating system, the backend database, third-party security tools, dependent libraries, and so on must all be taken into account when determining exploitability; a source code review cannot account for these factors.
·         Very few white box tests can be done without modifying the program, for example by changing values to force different execution paths or by generating a full range of inputs to exercise a particular function.


Worldwide Interoperability for Microwave Access (WiMAX)



WiMAX is a wireless technology put forth by the WiMAX Forum and is one of the technologies being used for 4G networks. It can be used both in point-to-point links and in the typical WAN-type configurations that are also used by 2G and 3G mobile network carriers. Its formal name is IEEE standard 802.16. Sprint owns a WiMAX-based network marketed under the name XOHM, though that will eventually be merged with Clearwire's network and sold under the Clearwire name. LTE is a competing technology that has the support of far more carriers worldwide.


Sunday, 18 May 2014

CT scan image reconstruction



At the core of any CT scan image reconstruction is a computer algorithm called filtered back projection (FBP). Each of the hundreds of x-ray image data sets obtained by the CT scanner is filtered to prepare it for the back projection step. Back projection is nothing more than adding each filtered x-ray data set's contribution into each pixel of the final reconstructed image. Each x-ray view data set consists of hundreds of floating point numbers, and there are hundreds of these data sets, while a high-resolution image contains millions to tens of millions of pixels. It is easy to see why summing hundreds of large data sets into millions of pixels is a very time-intensive operation, and it only gets worse as the image resolution increases.
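
As a rough illustration of the two steps just described, here is a minimal parallel-beam FBP sketch in NumPy. It assumes the sinogram is a (views x detectors) array; a production reconstructor would add filter windowing, oversampling, and careful scaling.

import numpy as np

def fbp(sinogram, thetas):
    # sinogram: (n_views, n_detectors) array; thetas: view angles in radians.
    n_views, n_det = sinogram.shape

    # Step 1: filter each view with a ramp filter in the frequency domain.
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    # Step 2: back projection -- add each filtered view's contribution
    # into every pixel of the output image.
    image = np.zeros((n_det, n_det))
    centre = n_det // 2
    ys, xs = np.mgrid[:n_det, :n_det] - centre
    for view, theta in zip(filtered, thetas):
        # Detector coordinate that each pixel projects onto at this angle.
        t = xs * np.cos(theta) + ys * np.sin(theta) + centre
        image += np.interp(t.ravel(), np.arange(n_det), view,
                           left=0.0, right=0.0).reshape(n_det, n_det)
    return image * np.pi / (2 * n_views)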

Image reconstruction implementation

SRC’s IMPLICIT+EXPLICIT™ Architecture is well suited to accelerating CT scan image reconstruction. In the simplest SRC-7 system implementation, a microprocessor is paired with a Series H MAP® processor. The system microprocessor provides data input and displays the final image using a commodity graphics card. The MAP processor contains an instantiation of the FBP algorithm. These two processors working together achieve a 29x performance boost over the 3.0 gigahertz 64-bit Xeon microprocessor working alone.
 

Saturday, 17 May 2014

Static Routing

Advantages of Static Routing:

Static routing has some enormous advantages over dynamic routing. Chief among these advantages is predictability. Because the network administrator computes the routing table in advance, the path a packet takes between two destinations is always known precisely, and can be controlled exactly. With dynamic routing, the path taken depends on which devices and links are functioning, and how the routers have interpreted the updates from other routers.

Additionally, because no dynamic routing protocol is needed, static routing doesn't impose any overhead on the routers or the network links. While this overhead may be minimal on an FDDI ring, or even on an Ethernet segment, it could be a significant portion of network bandwidth on a low-speed dial-up link. Consider a network with 200 network segments. Every 30 seconds, as required by the RIP specification, the routers all send an update containing reachability information for all 200 of these segments. With each route taking 16 octets of space, plus a small amount of overhead, the minimum size for an update in this network is over three kilobytes. Each router must therefore send a 3 KB update on each of its interfaces every 30 seconds. As you can see, for a large network, the bandwidth devoted to routing updates can add up quickly.
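
The arithmetic above is easy to check; here is a quick sketch in Python using the same figures (the 16-octet entry size is the one quoted in this example):

SEGMENTS = 200
OCTETS_PER_ROUTE = 16        # per-route size quoted above
UPDATE_INTERVAL_S = 30       # RIP sends a full update every 30 seconds

update_bytes = SEGMENTS * OCTETS_PER_ROUTE       # 3200 bytes before header overhead
overhead_bps = update_bytes * 8 / UPDATE_INTERVAL_S

print(update_bytes, "bytes per update (just over 3 KB)")
print(round(overhead_bps), "bit/s of routing overhead per interface")

At roughly 850 bit/s per interface, this steady traffic is negligible on Ethernet but close to a tenth of a 9.6 kbit/s dial-up link.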

Disadvantages of Static Routing:

While static routing has advantages over dynamic routing, it is not without its disadvantages. The price of its simplicity is a lack of scalability. For five network segments on three routers, computing an appropriate route from every router to every destination is not difficult. However, many networks are much larger. Consider what the routing might look like for a network with 200 network segments interconnected by more than a dozen routers. To implement static routing, you would need to compute the next hop for each network segment for each router, or more than 2,400 routes! As you can see, the task of precomputing routing tables quickly becomes a burden, and is prone to errors.

Of course, you could argue that this computation need only occur once, when the network is first built. But what happens when a network segment moves, or is added? While the computation may be relatively easy, to implement the change, you would have to update the configuration for every router on the network. If you miss one, in the best case, segments attached to that router will be unable to reach the moved or added segment. In the worst case, you'll create a routing loop that affects many routers.
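
To make the maintenance burden concrete, consider a toy static routing table in Python (addresses and next hops are purely illustrative). Every router in the network carries its own copy of a table like this, and every topology change must be edited into all of them by hand:

import ipaddress

STATIC_ROUTES = {
    ipaddress.ip_network("10.1.0.0/16"): "192.168.0.1",
    ipaddress.ip_network("10.2.0.0/16"): "192.168.0.2",
    ipaddress.ip_network("0.0.0.0/0"): "192.168.0.254",   # default route
}

def next_hop(destination):
    # Longest-prefix match over the static table.
    dest = ipaddress.ip_address(destination)
    matches = [net for net in STATIC_ROUTES if dest in net]
    return STATIC_ROUTES[max(matches, key=lambda net: net.prefixlen)]

print(next_hop("10.2.34.5"))   # -> 192.168.0.2

With 200 segments and a dozen routers, each such table holds up to 200 entries, and there are a dozen copies to keep consistent.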

Difference Between Ad Hoc Testing and Regression Testing:

Ad hoc testing is a commonly used term for software testing performed without planning and documentation (but can be applied to early scientific experimental studies).
The tests are intended to be run only once, unless a defect is discovered. Ad hoc testing is the least formal test method. As such, it has been criticized because it is not structured and hence defects found using this method may be harder to reproduce (since there are no written test cases). However, the strength of ad hoc testing is that important defects can be found quickly.
It is performed by improvisation: the tester seeks to find bugs by any means that seem appropriate. Ad hoc testing can be seen as a light version of error guessing, which itself is a light version of exploratory testing.

Regression testing is a type of software testing that seeks to uncover new software bugs, or regressions, in existing functional and non-functional areas of a system after changes, such as enhancements, patches, or configuration changes, have been made to them.
The intent of regression testing is to ensure that a change such as those mentioned above has not introduced new faults. One of the main reasons for regression testing is to determine whether a change in one part of the software affects other parts of the software.
Common methods of regression testing include rerunning previously completed tests and checking whether program behavior has changed and whether previously fixed faults have re-emerged. Regression testing can be performed to test a system efficiently by systematically selecting the appropriate minimum set of tests needed to adequately cover a particular change.
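
In practice, "rerunning previously completed tests" often means keeping a test that encodes a previously fixed fault so it can never silently return. A minimal pytest-style sketch (the function and the fault are hypothetical):

def normalize(name):
    return " ".join(name.split()).title()

def test_normalize_basic():
    assert normalize("ada lovelace") == "Ada Lovelace"

def test_normalize_regression_whitespace():
    # Re-checks a previously fixed fault: runs of spaces used to survive
    # normalization. Keeping this test ensures the fix never regresses.
    assert normalize("  ada   lovelace ") == "Ada Lovelace"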

Thursday, 15 May 2014

Image extraction and preprocessing


We use off-the-shelf tools to extract images from the embedding documents. For instance, images in PDF documents can be extracted by Adobe Acrobat's image extraction tools, and images contained within HTML documents can be extracted by special HTML parsers. Images extracted from PDF are usually in PNG format, while web images are typically in GIF format. Based on our observations, the majority of images extracted from PDF documents are stored in raster format and may also contain color information. Typically, humans do not need to see images in full color in order to determine the class label of an image, though full color certainly helps in understanding the meaning of the images. Thus, we convert all images to gray scale in order to standardize the input format of our system. Specifically, we convert all images to the Portable GrayMap (PGM) format, a gray scale image format which is easy to manipulate.
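
The text does not name a conversion tool; as one possible implementation, the grayscale/PGM normalization step takes only a few lines with Pillow (file names are illustrative):

from PIL import Image

def to_pgm(src_path, dst_path):
    img = Image.open(src_path)      # PNG, GIF, etc.
    gray = img.convert("L")         # collapse color to 8-bit gray scale
    gray.save(dst_path)             # a ".pgm" extension selects the PGM writer

to_pgm("figure1.png", "figure1.pgm")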


Extracting text and numerical data from 2-D plots
Two-dimensional (2-D) plots represent a quantitative relationship between a dependent variable and an independent variable. Extracting data from 2-D plots and converting them to a machine-processable form will enable users to analyze the data and compare them with other data. Extracting the metadata related to 2-D plots will enable retrieval of plots and corresponding documents and will help in the interpretation of the data. We developed a system for extracting metadata from single-part 2-D plot images, i.e., images containing a single 2-D plot.


Extracting line features

A part feature refers to a part of an image with some special property, e.g., a circle or a line. Based on our definitions of several non-photographic image classes and our experimental data, we observed correlations between certain objects and the corresponding image classes. For example, a two-dimensional coordinate system consisting of two axes is commonly seen in 2-D plots, while rectangles, ovals, and diamonds are common objects in diagrams. Thus, we attempt to design part image features for the basic objects found in non-photographic images and use them to discriminate between different classes of non-photographic images.
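
The authors do not specify a detector for these axis lines; one common choice is the probabilistic Hough transform, sketched below with OpenCV (the input file and all thresholds are assumptions):

import cv2
import numpy as np

img = cv2.imread("plot.pgm", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)

lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=5)
for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
    # Long near-horizontal or near-vertical lines are axis candidates.
    if abs(x1 - x2) < 3 or abs(y1 - y2) < 3:
        print("axis candidate:", (x1, y1), "->", (x2, y2))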