Monday, 19 May 2014

Agile Testing

The basic principles that form the basis for agile testing are communication, simplicity, feedback and iteration. It aims to fulfill customer requirements in a timely manner. The customers are considered part of the project, and there should be a close relationship between developers and testers. The testers help each other in finding quick solutions. In this approach a simple function is taken first and then extra functionality is added. The agile approach uses feedback from the customer at every step. To perform agile testing, an agile approach to development is mandatory. Basically, in the agile approach the entire software is divided into small modules, and the priorities of the modules are then identified depending on user requirements. The number of iterations should be kept small.

Advantages and Disadvantages of White Box Testing

Advantages of white box testing

·         Forces the test developer to reason carefully about the implementation.
·         Approximates the partitioning done by execution equivalence.
·         Reveals errors in "hidden" code, such as:
·         Beneficent side effects
·         Optimizations (e.g., a char table that changes representation when size > 100)
·         As knowledge of the internal coding structure is a prerequisite, it becomes very easy to find out which type of input/data can help in testing the application effectively.
·         Another advantage of white box testing is that it helps in optimizing the code.
·         It helps in removing extra lines of code, which can bring in hidden defects.

Disadvantages of white box testing

·         Expensive.
·         Misses cases omitted from the code.
·         As knowledge of the code and internal structure is a prerequisite, a skilled tester is needed to carry out this type of testing, which increases the cost.
·         It is nearly impossible to look into every bit of code to find hidden errors, which may create problems and result in failure of the application.
·         The code is not examined in a runtime environment. That is important for a number of reasons: exploitation of a vulnerability depends on all aspects of the platform being targeted, and source code is just one of those components. The underlying operating system, the backend database, third-party security tools, dependent libraries, etc. must all be taken into account when determining exploitability. A source code review cannot take these factors into account.
·         Very few white-box tests can be done without modifying the program, changing values to force different execution paths, or generating a full range of inputs to test a particular function (see the sketch below).
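To make the points about reasoning over the implementation and forcing execution paths concrete, here is a minimal Python sketch. The discount() function, its thresholds, and the test are hypothetical examples for illustration, not taken from any real project.

```python
# Minimal white-box test sketch: the tester knows the internal branch
# structure of the (hypothetical) discount() function and writes inputs
# that force each execution path, including the "hidden" bulk-order path.

def discount(order_total, item_count):
    """Hypothetical function with an internal branch on item_count."""
    if item_count > 100:          # hidden bulk-order path
        return order_total * 0.80
    if order_total > 500:
        return order_total * 0.90
    return order_total

def test_discount_all_paths():
    assert discount(200, 10) == 200    # no-discount path
    assert discount(600, 10) == 540    # high-total path
    assert discount(600, 101) == 480   # bulk path, only reachable with count > 100

if __name__ == "__main__":
    test_discount_all_paths()
    print("all branches exercised")
```

The bulk-order branch is only reachable when the tester knows, from reading the source, that the threshold is 100; a purely black-box tester might never try such an input.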


Worldwide Interoperability for Microwave Access (WiMAX)



WiMAX is a wireless technology put forth by the WiMAX Forum and is one of the technologies being used for 4G networks. It can be used in both point-to-point and the typical WAN-type configurations that are also used by 2G and 3G mobile network carriers. Its formal name is IEEE standard 802.16. Sprint owns a WiMAX-based network that is marketed under the name XOHM, though that will eventually be merged with Clearwire's network and sold under the Clearwire name. LTE is a competing technology that has the support of far more carriers worldwide.


Sunday, 18 May 2014

CT scan image reconstruction



At the core of any CT scan image reconstruction is a computer algorithm called Filtered Back Projection (FBP). Each of the hundreds of x-ray image data sets obtained by the CT scanner is filtered to prepare it for the back projection step. Back projection is nothing more than adding each filtered x-ray image data set's contribution into each pixel of the final image reconstruction. Each x-ray view data set consists of hundreds of floating point numbers, and there are hundreds of these data sets. In a high-resolution image, there are millions to tens of millions of pixels. It is easy to see why summing hundreds of large data sets into millions of pixels is a very time-intensive operation, which only gets worse as the image resolution increases.
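As an illustration of the filter-then-back-project structure described above, here is a minimal Python/NumPy sketch. It is not SRC's implementation; the ideal ramp filter, the nearest-neighbor detector lookup, and the function name are simplifying assumptions.

```python
import numpy as np

def filtered_back_projection(sinogram, angles_deg, image_size):
    """Toy FBP: sinogram has one row of detector readings per view angle."""
    num_views, num_detectors = sinogram.shape

    # 1. Filter: apply an ideal ramp filter to each view in the frequency domain.
    ramp = np.abs(np.fft.fftfreq(num_detectors))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    # 2. Back-project: add each filtered view's contribution into every pixel.
    image = np.zeros((image_size, image_size))
    center = image_size / 2.0
    ys, xs = np.mgrid[0:image_size, 0:image_size]
    xs, ys = xs - center, ys - center
    for view, theta in zip(filtered, np.deg2rad(angles_deg)):
        # Nearest detector bin seen by each pixel at this view angle.
        t = xs * np.cos(theta) + ys * np.sin(theta) + num_detectors / 2.0
        idx = np.clip(t.astype(int), 0, num_detectors - 1)
        image += view[idx]

    return image * (np.pi / (2 * num_views))

if __name__ == "__main__":
    # Tiny synthetic example: 180 views of 128 detector readings each.
    sino = np.random.rand(180, 128)
    img = filtered_back_projection(sino, np.arange(180), image_size=128)
    print(img.shape)   # (128, 128)
```

The per-view loop that accumulates into every pixel is exactly the summation described above, which is why the cost grows so quickly with image resolution and why the step benefits from hardware acceleration.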

Image reconstruction implementation

SRC’s IMPLICIT+EXPLICIT™ Architecture is well suited to accelerating CT scan image reconstruction. In the simplest SRC-7 system implementation, a microprocessor is paired with a Series H MAP® processor. The system microprocessor provides data input and displays the final image using a commodity graphics card. The MAP processor contains an instantiation of the FBP algorithm. These two processors working together achieve a 29x performance boost over the 3.0 gigahertz 64-bit Xeon microprocessor working alone.
 

Saturday, 17 May 2014

Static Routing

Advantages of Static Routing:

Static routing has some enormous advantages over dynamic routing. Chief among these advantages is predictability. Because the network administrator computes the routing table in advance, the path a packet takes between two destinations is always known precisely, and can be controlled exactly. With dynamic routing, the path taken depends on which devices and links are functioning, and how the routers have interpreted the updates from other routers.

Additionally, because no dynamic routing protocol is needed, static routing doesn't impose any overhead on the routers or the network links. While this overhead may be minimal on an FDDI ring, or even on an Ethernet segment, it could be a significant portion of network bandwidth on a low-speed dial-up link. Consider a network with 200 network segments. Every 30 seconds, as required by the RIP specification, the routers all send an update containing reachability information for all 200 of these segments. With each route taking 16 octets of space, plus a small amount of overhead, the minimum size for an update in this network is over three kilobytes. Each router must therefore send a 3 KB update on each of its interfaces every 30 seconds. As you can see, for a large network, the bandwidth devoted to routing updates can add up quickly.
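A quick back-of-the-envelope check of those numbers (the 4-byte header used here is an assumed round figure for illustration):

```python
# Rough RIP overhead for the 200-segment example above.
routes = 200
bytes_per_route = 16     # per-route entry size cited above
header_bytes = 4         # assumed small fixed overhead per update
interval_s = 30          # RIP update interval

update_bytes = routes * bytes_per_route + header_bytes
bits_per_second = update_bytes * 8 / interval_s

print(f"update size: {update_bytes} bytes (~{update_bytes / 1024:.1f} KB)")   # ~3.1 KB
print(f"per-interface overhead: ~{bits_per_second:.0f} bit/s")                # ~854 bit/s
```

Under a kilobit per second is negligible on Ethernet, but on a slow dial-up link it is a noticeable, permanent tax.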

Disadvantages of Static Routing:

While static routing has advantages over dynamic routing, it is not without its disadvantages. The price of its simplicity is a lack of scalability. For five network segments on three routers, computing an appropriate route from every router to every destination is not difficult. However, many networks are much larger. Consider what the routing might look like for a network with 200 network segments interconnected by more than a dozen routers. To implement static routing, you would need to compute the next hop for each network segment for each router, or more than 2,400 routes! As you can see, the task of precomputing routing tables quickly becomes a burden, and is prone to errors.

Of course, you could argue that this computation need only occur once, when the network is first built. But what happens when a network segment moves, or is added? While the computation may be relatively easy, to implement the change, you would have to update the configuration for every router on the network. If you miss one, in the best case, segments attached to that router will be unable to reach the moved or added segment. In the worst case, you'll create a routing loop that affects many routers.

Difference Between Ad hoc Testing and Regression Testing:

Ad hoc testing is a commonly used term for software testing performed without planning and documentation (but can be applied to early scientific experimental studies).
The tests are intended to be run only once, unless a defect is discovered. Ad hoc testing is the least formal test method. As such, it has been criticized because it is not structured and hence defects found using this method may be harder to reproduce (since there are no written test cases). However, the strength of ad hoc testing is that important defects can be found quickly.
It is performed by improvisation: the tester seeks to find bugs by any means that seem appropriate. Ad hoc testing can be seen as a light version of error guessing, which itself is a light version of exploratory testing.

Regression testing is a type of software testing that seeks to uncover new software bugs, or regressions, in existing functional and non-functional areas of a system after changes, such as enhancements, patches or configuration changes, have been made to them.
The intent of regression testing is to ensure that a change such as those mentioned above has not introduced new faults. One of the main reasons for regression testing is to determine whether a change in one part of the software affects other parts of the software.
Common methods of regression testing include rerunning previously completed tests and checking whether program behavior has changed and whether previously fixed faults have re-emerged. Regression testing can be performed to test a system efficiently by systematically selecting the appropriate minimum set of tests needed to adequately cover a particular change.
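As a toy illustration of "selecting the appropriate minimum set of tests" for a change, the sketch below maps tests to the modules they cover and re-runs only the tests touched by the changed modules. The module and test names are made up.

```python
# Toy regression-test selection: re-run only tests that cover changed modules.
coverage_map = {
    "test_login":    {"auth", "session"},
    "test_checkout": {"cart", "payment"},
    "test_profile":  {"auth", "profile"},
}

def select_regression_tests(changed_modules):
    """Return the tests whose covered modules intersect the change set."""
    changed = set(changed_modules)
    return sorted(t for t, mods in coverage_map.items() if mods & changed)

if __name__ == "__main__":
    print(select_regression_tests(["auth"]))   # -> ['test_login', 'test_profile']
```

Real regression-selection tools derive the coverage map from instrumentation or version-control history, but the selection step has this basic shape.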

Thursday, 15 May 2014

Image extraction and preprocessing


We use off-the-shelf tools to extract images from the embedding documents. For instance, images in PDF documents can be extracted by Adobe Acrobat image extraction tools. Images contained within HTML documents can be extracted by special HTML parsers. Images extracted from PDF are usually in PNG format. Web images are typically in GIF format. Based on our observations, the majority of images extracted from PDF documents are stored in raster format and may also contain color information. Typically, humans do not need to see the images in full color in order to determine the class label of an image, though full color certainly helps in understanding the meanings of the images. Thus, we convert all images to grayscale format in order to standardize the input format of our system. Specifically, we convert all images to the Portable GrayMap (PGM) format, a grayscale image format which is easy to manipulate.
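A minimal version of this grayscale/PGM conversion step can be written with the Pillow library; the file names below are placeholders.

```python
# Convert an extracted image (e.g., PNG from a PDF or GIF from the web)
# to 8-bit grayscale and save it as Portable GrayMap (PGM).
from PIL import Image

def to_pgm(src_path, dst_path):
    img = Image.open(src_path).convert("L")   # "L" = single-channel grayscale
    img.save(dst_path)                        # .pgm extension selects the PGM writer

if __name__ == "__main__":
    to_pgm("figure_from_pdf.png", "figure_from_pdf.pgm")
```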


Extracting text and numerical data from 2-D plots
Two-dimensional (2-D) plots represent a quantitative relationship between a dependent variable and an independent variable. Extracting data from 2-D plots and converting them to a machine-processible form will enable users to analyze the data and compare them with other data. Extracting the metadata related to 2-D plots will enable retrieval of plots and corresponding documents and will help in the interpretation of the data. We developed a system for extracting metadata from single-part 2-D plot images, i.e., images containing a single 2-D plot.
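Once a plot's axes and tick labels have been located, recovering the plotted numbers reduces to a linear mapping from pixel coordinates to data coordinates. The sketch below assumes two calibration points per axis; all coordinates and values are invented for illustration.

```python
# Map a pixel position inside a plot to data coordinates, given two
# calibration points per axis: (pixel position, data value).
def pixel_to_data(px, py, x_cal, y_cal):
    (px0, x0), (px1, x1) = x_cal      # two x-axis ticks: (pixel column, value)
    (py0, y0), (py1, y1) = y_cal      # two y-axis ticks: (pixel row, value)
    x = x0 + (px - px0) * (x1 - x0) / (px1 - px0)
    y = y0 + (py - py0) * (y1 - y0) / (py1 - py0)
    return x, y

if __name__ == "__main__":
    # Assume ticks "0" and "10" sit at pixel columns 50 and 450,
    # and ticks "0" and "1.0" sit at pixel rows 400 and 40.
    print(pixel_to_data(250, 220,
                        x_cal=[(50, 0.0), (450, 10.0)],
                        y_cal=[(400, 0.0), (40, 1.0)]))   # -> (5.0, 0.5)
```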


Extracting line features

A part feature refers to a part of an image with some special properties, e.g., a circle or a line. Based on our definitions of several non-photographic image classes and our experimental data, we observed correlations of certain objects with corresponding image classes. For example, a two-dimensional coordinate system, consisting of two axes, is commonly seen in 2-D plots; rectangles, ovals and diamonds are common objects in diagrams. Thus, we attempt to design part image features for basic objects in non-photographic images and use them to discriminate different classes of non-photographic images.
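One common way to detect candidate axis lines like those described above is edge detection followed by a Hough transform; here is a minimal OpenCV sketch (the thresholds are illustrative, not tuned values from any particular system).

```python
# Detect straight line segments (e.g., candidate plot axes) with OpenCV.
import cv2
import numpy as np

def detect_lines(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 50, 150)                      # edge map
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=80, minLineLength=50, maxLineGap=5)
    return [] if lines is None else [tuple(l[0]) for l in lines]

if __name__ == "__main__":
    # "plot.pgm" is a placeholder path to one of the converted images.
    for x1, y1, x2, y2 in detect_lines("plot.pgm"):
        print(f"segment: ({x1},{y1}) -> ({x2},{y2})")
```

Long, nearly horizontal and nearly vertical segments that span most of the image are good axis candidates; short segments are more typical of diagram objects.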

Wednesday, 14 May 2014

Hybrid Routing

In a hybrid routing scheme, some parts of the network use static routing, and some parts use dynamic routing. Which parts use static or dynamic routing is not important, and many options are possible. One of the most common hybrid schemes is to use static routing on the fringes of the network (what I have called the access networks) and to use dynamic routing in the core and distribution networks. The advantage of using static routing in the access networks is that these networks are where your user machines are typically located; these machines often have little or no support for dynamic routing. Additionally, access networks often have only one or two router attachments, so the burden of configuring static routing is limited. It may even be possible to define nothing more than a default route on these stub networks. Because of the limited connections to these networks, you usually don't need to reconfigure routing on a stub network when it gets moved to a new place in the network.
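To make the "nothing more than a default route" case concrete, here is a small Python sketch of a stub network's forwarding decision using longest-prefix matching; the addresses and next hop are invented for the example.

```python
# Longest-prefix-match lookup over a tiny static routing table that is
# essentially just one connected subnet plus a default route.
import ipaddress

STATIC_ROUTES = {
    "192.168.10.0/24": "directly connected",   # the stub (access) segment itself
    "0.0.0.0/0":       "10.0.0.1",             # default route toward the distribution router
}

def next_hop(destination):
    dest = ipaddress.ip_address(destination)
    candidates = [(ipaddress.ip_network(p), hop) for p, hop in STATIC_ROUTES.items()]
    matches = [(net, hop) for net, hop in candidates if dest in net]
    net, hop = max(matches, key=lambda m: m[0].prefixlen)   # most specific prefix wins
    return hop

if __name__ == "__main__":
    print(next_hop("192.168.10.7"))   # directly connected
    print(next_hop("172.16.4.9"))     # 10.0.0.1 (falls through to the default route)
```

Because everything off-segment goes to one next hop, moving this stub network elsewhere requires no routing changes on the stub itself, which is exactly the appeal described above.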

On the other hand, distribution and core networks often have many router connections, and therefore many different routes to maintain. Therefore, routers in these components of the network usually can't get by with a default route. Routers (and hosts) in the central parts of the network need complete routing information for the entire network. Furthermore, routers in the core and distribution networks usually need to be informed of changes in the connectivity of access networks. While it is certainly possible to inform each router manually when a change occurs, it is usually easier and more practical to allow a dynamic routing protocol to propagate the changes.

Tuesday, 13 May 2014

Automated analysis of images in documents for intelligent document search


We use images to present a wide variety of important information in documents. For example, two-dimensional (2-D) plots display important data in scientific publications. Often, end-users seek to extract this data and convert it into a machine-processible form so that the data can be analyzed automatically or compared with other existing data. Existing document data extraction tools are semi-automatic and require users to provide metadata and interactively extract the data.

 Image classification

Automatic image classification is often an important step in content-based image retrieval and annotation. Prior efforts model the retrieval and annotation problems as automatic classification of images into classes corresponding to semantic concepts. Visual features and modeling techniques have attracted significant attention. Textural features, color features, edge features, or combinations of these features have been developed for classifying images. Chapelle et al. used support vector machines to improve the histogram-based classification of images. Li et al. utilized context information of image blocks, i.e., statistics about neighboring blocks, and modeled images using two-dimensional hidden Markov models to classify images. Maree et al. proposed a generic image classification approach by extracting subwindows randomly and using supervised learning. Yang et al. designed a method to learn the correspondence between image regions and keywords through Multiple-Instance Learning (MIL).
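A stripped-down sketch of the histogram-plus-SVM idea can be written with scikit-learn. This only shows the shape of such a pipeline; it does not reproduce any of the cited methods, and the two-class synthetic data is invented for the example.

```python
# Histogram features + SVM classifier, in the spirit of histogram-based
# image classification (illustrative only).
import numpy as np
from sklearn.svm import SVC

def histogram_feature(gray_image, bins=32):
    """Normalized intensity histogram of an 8-bit grayscale image."""
    hist, _ = np.histogram(gray_image, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def train_classifier(images, labels):
    X = np.array([histogram_feature(img) for img in images])
    clf = SVC(kernel="rbf")
    clf.fit(X, labels)
    return clf

if __name__ == "__main__":
    # Synthetic stand-in data: darker vs. brighter "images".
    rng = np.random.default_rng(0)
    dark = [rng.integers(0, 100, (64, 64)) for _ in range(20)]
    bright = [rng.integers(150, 256, (64, 64)) for _ in range(20)]
    clf = train_classifier(dark + bright, ["diagram"] * 20 + ["photo"] * 20)
    print(clf.predict([histogram_feature(bright[0])]))   # -> ['photo']
```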

Image analysis

Recognition and interpretation of graphics, such as engineering drawings, maps, schematic diagrams, and organization charts, are important steps for processing mostly-graphics document images. Yu et al. developed an engineering drawing understanding system for processing a variety of drawings. The system combines domain-independent algorithms, including segmentation and symbol classification algorithms, and domain-specific knowledge, for example a symbol library, in the processing of graphics. Okazaki et al. proposed a loop-structure-based two-phase symbol recognition method for reading logic circuit diagrams. Blostein et al. summarized various approaches to diagram recognition. Futrelle et al. developed a system to extract and classify vector-format diagrams in PDF documents. Shao et al. designed a method for recognition and classification of figures in vector-based PDF documents.

Thursday, 8 May 2014

WiMAX


WiMAX stands for Worldwide Interoperability for Microwave Access and is technically referred to by the IEEE as 802.16. WiMAX is also commonly termed a 4G network. It is a wireless wide area network (WAN) technology that can cover what DSL lines can cover, but without wires. It can give Internet connectivity to computers in the way GSM has given phone connectivity to mobile phones and made them replace fixed landline phones.



Digital image processing technique for blood glucose measurements



Scalable algorithms must be developed using parallel techniques to reduce processing time and increase memory efficiency. If the data amount exceeds the memory of the CPU or GPU, several techniques can be employed, including compressed or packed representations of the data, decomposition techniques, multi-resolution schemes, or out-of-core techniques. Recent research combined bricking and decomposition with a hierarchical data structure.
           
Different programming steps are used for data management:

(i) decomposition techniques to reach a multi-resolution subdivision of the data,
(ii) streaming techniques to asynchronously reach the right viewing data, and
(iii) algorithms to render the volume visualization or to visualize the zoomed data.
The main disadvantage of working with Giga- to Terabyte volume data is the runtime performance. Current research is focused on advanced parallelization techniques in order to reach an acceptable real-time response. These techniques require different hardware architectures. Several programming languages have been developed to support such architectures:

1.      Parallel CPU-based programming on a single node with shared memory, using threaded programming techniques like OpenMP or QtThreaded (see the sketch after this list).

2.      Parallel GPU-based programming on a single node with one GPU or multiple GPUs, using programming languages for the massively parallel cores on the graphics card. With advances in GPU architecture, several algorithms have reached higher efficiency by transferring the program from CPU to GPU. This means that instead of four to eight parallel CPUs, 240 to 480 massively parallel processing cores on the graphics card are used. Several languages have been developed by the graphics card industry to code algorithms for execution on the GPU.

3.      Parallel programming on multiple nodes in a cluster of linked computers connected through a fast local area network (LAN), which is also referred to as grid computing. Special software interfaces, such as the Message Passing Interface (MPI), manage the communication between the processes.
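As a rough analogue of item 1 (shared-memory parallelism on a single node), here is a minimal Python multiprocessing sketch that decomposes a volume into bricks and processes them in parallel. The brick size, the per-brick operation, and the random test volume are placeholders; a real implementation would keep the volume in shared memory rather than copying it to each worker.

```python
# Decompose a volume into bricks and process them in parallel on one node,
# analogous to the shared-memory (OpenMP-style) approach in item 1.
import numpy as np
from multiprocessing import Pool

BRICK = 64   # edge length of one brick, chosen arbitrarily for the sketch

def process_brick(args):
    """Placeholder per-brick work: here, just the mean intensity of the brick."""
    volume, z, y, x = args
    brick = volume[z:z + BRICK, y:y + BRICK, x:x + BRICK]
    return (z, y, x), float(brick.mean())

def process_volume(volume, workers=4):
    starts = [(volume, z, y, x)
              for z in range(0, volume.shape[0], BRICK)
              for y in range(0, volume.shape[1], BRICK)
              for x in range(0, volume.shape[2], BRICK)]
    with Pool(workers) as pool:
        return dict(pool.map(process_brick, starts))

if __name__ == "__main__":
    vol = np.random.rand(128, 128, 128)       # stand-in for a volume data set
    results = process_volume(vol)
    print(len(results), "bricks processed")   # -> 8 bricks processed
```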