THE PROBLEM OF SCIENTIFIC RESEARCH EFFECTIVENESS
By: Alexander F. Kurgaev, Alexander V. Palagin

Abstract: The paper addresses the following aspects of the problem of scientific research effectiveness: it formulates the main problem of increasing the efficiency of scientific research; identifies the most essential attributes of scientific knowledge that delimit the region of optimal work efficiency for professional scientists; reveals the hierarchy of problem situations on the way to increasing scientists’ work efficiency; and defines and grounds solutions to these problem situations. The paper also investigates the efficiency of the chosen approach.

Keywords: scientific research, the canonical form of knowledge, integral knowledge, cognition, knowledge processing system.

ACM Classification Keywords: A.0 General Literature; J.4 Social and Behavioral Sciences; M.4 Intelligence Metasynthesis and Knowledge Processing in Intelligent Systems

Link:

THE PROBLEM OF SCIENTIFIC RESEARCH EFFECTIVENESS

Alexander F. Kurgaev, Alexander V. Palagin

http://foibg.com/ijita/vol17/ijita17-1-p10.pdf

MODELING OF TRANSCUTANEOUS ENERGY TRANSFER SYSTEM FOR AN IMPLANTABLE ...
By: Liu et al.

Abstract: This study models a transcutaneous energy transmission system which can supply DC power to an implanted device without an external battery. The goals of the study are to: (1) develop a model to describe the transcutaneous energy transmission system; and (2) use the developed model to design a transcutaneous energy transmission system for an implantable gastrointestinal neurostimulator. The complete transcutaneous energy system includes a power amplifier, a highly inductive coupling structure, and an AC-to-DC rectifying circuit in the receiver. Power amplification is based on the single-ended class E amplifier concept. The power amplification stage is self-oscillating, and the oscillation frequency is influenced by the coupling of the coils. The highly inductive coupling structure employs the stage tuning concept. Design methods and detailed analysis are provided. The proposed model is verified through the implementation of the design.

Keywords: computer modeling, neurostimulation, gastrointestinal disorders

Link:

MODELING OF TRANSCUTANEOUS ENERGY TRANSFER SYSTEM FOR AN IMPLANTABLE GASTROINTESTINAL STIMULATION DEVICE

Joanna Liu C. Wu, Martin P. Mintchev

http://foibg.com/ijita/vol17/ijita17-1-p09.pdf

DECREASING VOLUME OF FACE IMAGES DATABASE AND EFFICIENT FACE DETECTION ...
By: Grigor A. Poghosyan and Hakob G. Sarukhanyan

Abstract: As one of the most successful applications of image analysis and understanding, face recognition has gained significant attention and, over the last ten years or so, has become a popular area of research in computer vision. A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame from a video source. One way to do this is by comparing selected facial features from the image against a facial database. Biometric face recognition, otherwise known as Automatic Face Recognition (AFR), is a particularly attractive biometric approach, since it focuses on the same identifier that humans primarily use to distinguish one person from another: their faces. One of its main goals is the understanding of the complex human visual system and of how humans represent faces in order to discriminate between different identities with high accuracy. Human face and facial feature detection have attracted much attention because of their wide applications, such as face recognition, face image database management and human-computer interaction, so it is of interest to develop fast and robust algorithms to detect the human face and facial features. This paper describes a visual object detection framework that is capable of processing images extremely rapidly while achieving high detection rates.
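The extremely rapid processing the abstract refers to rests on the integral image (summed-area table), which makes any rectangular pixel sum, and hence any Haar-like feature, computable in a constant number of lookups. A minimal sketch, not the paper's implementation; the array contents are illustrative:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a leading zero row/column:
    ii[y, x] = sum of img[0:y, 0:x]."""
    ii = np.cumsum(np.cumsum(img, axis=0, dtype=np.int64), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))

def rect_sum(ii, top, left, h, w):
    """Pixel sum over an h-by-w rectangle in four table lookups,
    independent of the rectangle's size."""
    return (ii[top + h, left + w] - ii[top, left + w]
            - ii[top + h, left] + ii[top, left])

def haar_two_rect(ii, top, left, h, w):
    """A two-rectangle Haar-like feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, top, left, h, half) - rect_sum(ii, top, left + half, h, half)
```

Because every feature costs the same few lookups regardless of its size, thousands of features per window stay affordable, which is what enables real-time detection.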

Keywords: Haar-like features, Integral Images, Line Edge Map (LEM), Mask size

Link:

DECREASING VOLUME OF FACE IMAGES DATABASE AND EFFICIENT FACE DETECTION ALGORITHM

Grigor A. Poghosyan and Hakob G. Sarukhanyan

http://foibg.com/ijita/vol17/ijita17-1-p08.pdf

A SURVEY OF NONPARAMETRIC TESTS FOR THE STATISTICAL ANALYSIS OF EVOLUTIONARY ...
By: Rafael Lahoz-Beltra, Carlos Perales-Gravan

Abstract: One of the main problems in the statistical analysis of Evolutionary Computation (EC) experiments is the ‘statistical personality’ of the data. A main feature of EC algorithms is the sampling of solutions from one generation to the next, based on Holland’s schema theory, with solutions having the best fitness (or evaluation) values being more likely to be chosen. In consequence, simulation experiments result in biased samples with non-normal, highly skewed, asymmetric distributions. Furthermore, the main problem arises from noncompliance with one of the main premises of the central limit theorem, invalidating statistical analysis based on the average fitness of the solutions. In this paper, we present a tutorial or ‘How-to’ explaining the basics of the statistical analysis of data in EC. The use of nonparametric tests for comparing two or more medians, combined with Exploratory Data Analysis, is a good option, bearing in mind that we only consider two experimental situations that are common among EC practitioners: (i) the performance evaluation of an algorithm and (ii) the comparison of multiple experiments. The different approaches are illustrated with examples (see http://bioinformatica.net/tests/survey.html) selected from Evolutionary Computation and the related field of Artificial Life.
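The nonparametric comparisons the survey advocates can be computed without any statistics package. A self-contained sketch of the Mann-Whitney (Wilcoxon rank-sum) U statistic, with ties given average ranks; the fitness samples in the test are made up for illustration:

```python
def ranks(values):
    """Average ranks (1-based); tied values share their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # mean of positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def mann_whitney_u(a, b):
    """U statistic of sample a versus sample b (rank-sum form):
    U = R1 - n1(n1+1)/2, where R1 is the rank sum of sample a."""
    r = ranks(list(a) + list(b))
    r1 = sum(r[:len(a)])
    return r1 - len(a) * (len(a) + 1) / 2
```

In practice one would compare U against its null distribution (or use a library routine such as scipy.stats.mannwhitneyu, which also returns a p-value); the point of the sketch is that the test ranks fitness values instead of averaging them, sidestepping the skewed distributions described above.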

Keywords: Evolutionary Computation, Statistical Analysis and Simulation.

ACM Classification Keywords: G.3 PROBABILITY AND STATISTICS

Link:

A SURVEY OF NONPARAMETRIC TESTS FOR THE STATISTICAL ANALYSIS OF EVOLUTIONARY COMPUTATIONAL EXPERIMENTS

Rafael Lahoz-Beltra, Carlos Perales-Gravan

http://foibg.com/ijita/vol17/ijita17-1-p07.pdf

A MAMDANI-TYPE FUZZY INFERENCE SYSTEM TO AUTOMATICALLY ASSESS ...
By: Gloria Sánchez–Torrubia, Carmen Torres–Blanc

Abstract: In education it is very important for both users and teachers to know how much the student has learned. To accomplish this task, GRAPHs (the eMathTeacher-compliant tool that will be used to simulate Dijkstra’s algorithm) generates an interaction log that will be used to assess the student’s learning outcomes. This poses an additional problem: the assessment of the interactions between the user and the machine is a time-consuming and tiresome task, as it involves processing a lot of data. Additionally, one of the most useful features for a learner is the immediacy provided by an automatic assessment. On the other hand, a sound assessment of learning cannot be confined to merely counting the errors; it should also take into account their type. In this sense, fuzzy reasoning offers a simple and versatile tool for simulating the expert teacher’s knowledge. This paper presents the design and implementation of three fuzzy inference systems (FIS) based on Mamdani’s method for automatically assessing Dijkstra’s algorithm learning by processing the interaction log provided by GRAPHs.
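Mamdani inference of the kind the paper builds on can be sketched in a few lines: fuzzify the input, clip each rule's consequent by its firing strength (min implication), aggregate with max, and defuzzify by centroid. The two rules and all membership parameters below are invented for illustration and are not the paper's actual FIS:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def mamdani_grade(errors):
    """Toy Mamdani FIS on a 0-10 error scale:
    rule 1: IF errors are few  THEN grade is high
    rule 2: IF errors are many THEN grade is low
    Min implication, max aggregation, centroid defuzzification."""
    few = tri(errors, -5, 0, 5)        # firing strengths (fuzzification)
    many = tri(errors, 0, 5, 10)
    xs = [i / 10 for i in range(101)]  # discretized grade universe [0, 10]
    agg = [max(min(few, tri(g, 5, 10, 15)),   # clipped 'high' consequent
               min(many, tri(g, -5, 0, 5)))   # clipped 'low' consequent
           for g in xs]
    den = sum(agg)
    return sum(g * m for g, m in zip(xs, agg)) / den if den else 5.0
```

A real assessment FIS would weight error types differently, as the abstract argues; the sketch only shows the inference pipeline itself.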

Keywords: Algorithm Simulation, Algorithm Visualization, Active and Autonomous Learning, Automatic Assessment, Fuzzy Assessment, Graph Algorithms.

ACM Classification Keywords: I.2.3 Artificial Intelligence: Deduction and Theorem Proving - Answer/reason extraction; Deduction (e.g., natural, rule-based); Inference engines; Uncertainty, "fuzzy," and probabilistic reasoning. K.3.2 Computers and Education: Computer and Information Science Education – computer science education, self-assessment. G.2.2 Discrete Mathematics: Graph Theory – graph algorithms, path and circuit problems.

Link:

A MAMDANI-TYPE FUZZY INFERENCE SYSTEM TO AUTOMATICALLY ASSESS DIJKSTRA’S ALGORITHM SIMULATION

Gloria Sánchez–Torrubia, Carmen Torres–Blanc

http://foibg.com/ijita/vol17/ijita17-1-p06.pdf

ADAPTIVE CODING SCHEME FOR RAPIDLY CHANGING COMMUNICATION CHANNELS
By: Gurgen Khachatrian

Abstract: In this paper we investigate the problem of reliable and efficient data transmission over rapidly changing communication channels. A typical example of such a channel is a high-speed free-space optical communication channel under various adverse atmospheric conditions, such as turbulence. We propose a new concept for developing an error-correcting coding scheme in order to achieve extremely reliable communication through atmospheric turbulence. The method is based on applying adaptive error-correcting codes to recover information symbols lost to fading caused by the turbulent channel.
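The core idea, recovering symbols erased by fading when the receiver knows which positions faded, can be illustrated with the simplest possible erasure code: a single XOR parity symbol. This is a toy stand-in for exposition, not the paper's adaptive scheme:

```python
def encode(block):
    """Append one XOR parity byte so any single known erasure is recoverable."""
    parity = 0
    for b in block:
        parity ^= b
    return block + [parity]

def recover(received, lost_pos):
    """Rebuild the symbol erased at lost_pos (the position is known to the
    receiver, e.g. from detecting the fade), by XOR-ing everything else."""
    parity = 0
    for i, b in enumerate(received):
        if i != lost_pos:
            parity ^= b
    out = list(received)
    out[lost_pos] = parity
    return out
```

An adaptive scheme would vary the amount of redundancy with the observed channel state; the sketch fixes it at one symbol to keep the erasure-recovery mechanism visible.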

Keywords: error correcting codes.

ACM Classification Keywords: E.4 CODING AND INFORMATION THEORY.

Link:

ADAPTIVE CODING SCHEME FOR RAPIDLY CHANGING COMMUNICATION CHANNELS

Gurgen Khachatrian

http://foibg.com/ijita/vol17/ijita17-1-p05.pdf

THE ALGORITHM BASED ON METRIC REGULARITIES
By: Maria Dedovets, Oleg Senko

Abstract: A new pattern recognition method is presented, based on collective decisions by systems of metric regularities. Unlike previous methods based on voting over regularities, the discussed technique imposes no constraints on the geometric shape of the regularities. The search for metric regularities is reduced to computing connected components in a special contiguity graph. The method incorporates statistical validation of the found metric regularities with the help of a permutation test.
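The permutation test used for validation works by shuffling class labels: if a found regularity genuinely separates two groups, the observed statistic should be rare under random relabeling. A generic sketch for a difference of group means; the data and seed in the test are arbitrary:

```python
import random

def permutation_test(a, b, n_perm=2000, seed=0):
    """Two-sided permutation test for the difference of group means.
    Returns the fraction of label shuffles whose statistic is at least
    as extreme as the observed one (the permutation p-value)."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                      # random relabeling
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / n_perm
```

A small p-value means the regularity is unlikely to be an artifact of the particular labeling; the same machinery applies with any test statistic, not just the mean difference used here.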

Keywords: pattern recognition, metric regularities, permutation test

ACM Classification Keywords: I.5 Pattern Recognition; I.5.2 Design Methodology – Classifier design and evaluation

Link:

THE ALGORITHM BASED ON METRIC REGULARITIES

Maria Dedovets, Oleg Senko

http://foibg.com/ijita/vol17/ijita17-1-p04.pdf

MATHEMATICAL MODEL OF THE CLOUD FOR RAY TRACING
By: Ostroushko et al.

Abstract: Three-dimensional computer graphics is in demand today in many fields of human activity: the production of computer games, TV, animation, advertising, visualization of the out-of-cockpit environment for transport simulators, CAD systems, scientific visualization, computer tomography, etc. Requirements for the realism of the generated images are especially high. Raising realism leads to increased image detail, the necessity of processing shadows and light sources, accounting for environment transparency, and the generation of various special effects. Therefore, research in computer graphics is intensively conducted with the purpose of developing maximally effective models of three-dimensional scenes and fast methods of realistic image synthesis based on them. One of the most promising methods is ray tracing. The method makes possible the synthesis of highly realistic images; however, it demands a large computational cost. A cloud synthesis model for ray tracing is described. The model is aimed at real-time graphics and allows raising the realism of the synthesized image. The advantages of the proposed model are the ability to work with a cloud at different levels of detail, the availability of bounding surfaces inside the cloud structure, which increases the efficiency of the image synthesis algorithm, and the ease of cloud animation. Due to the great flexibility of the cloud model and the high performance of its visualization algorithm, these results can be used in real-time visualization systems.
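Ray tracing a sphere-based cloud model reduces, at its innermost step, to ray-sphere intersection: substitute the ray o + t·d into the sphere equation and solve the resulting quadratic in t. A minimal sketch with a normalized direction vector; the scene values in the test are arbitrary:

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Nearest non-negative intersection distance t of a ray with a sphere,
    or None if the ray misses. `direction` must be a unit vector."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4 * c               # discriminant (a = 1 for unit direction)
    if disc < 0:
        return None                    # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2     # nearer root first
    if t < 0:
        t = (-b + math.sqrt(disc)) / 2 # origin inside the sphere
    return t if t >= 0 else None
```

Bounding spheres serve exactly this purpose in the model described above: a cheap miss on the bounding surface skips all the detail inside it.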

Keywords: cloud, procedural modeling, ray tracing, sphere, visualization algorithm.

ACM Classification Keywords: I.3.5 Computational Geometry and Object Modeling.

Link:

MATHEMATICAL MODEL OF THE CLOUD FOR RAY TRACING

Andrii Ostroushko, Nataliya Bilous, Andrii Bugriy, Yaroslav Chagovets

http://foibg.com/ijita/vol17/ijita17-1-p03.pdf

REPRESENTING TREE STRUCTURES BY NATURAL NUMBERS
By: Luengo et al.

Abstract: Transformation of data structures into natural numbers using the classical Gödelization process can be a very powerful tool for encoding properties of data structures. This paper presents some introductory ideas for studying tree data structures through the prism of Gödel numbers, and presents a few examples of applying this approach to trees.
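One classical Gödelization of ordered trees assigns a leaf the number 1 and an inner node the product of p_i raised to the encoding of its i-th child, over the first primes; unique factorization then makes the encoding injective, so structural properties of the tree become arithmetic properties of the number. A sketch under that convention, which is one of several possible encodings:

```python
def primes():
    """Infinite prime generator (trial division; fine for tiny trees)."""
    n = 2
    while True:
        if all(n % p for p in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

def godel(tree):
    """Goedel number of an ordered tree given as nested tuples.
    A leaf () maps to 1; a node maps to prod p_i ** godel(child_i)."""
    g = 1
    for p, child in zip(primes(), tree):
        g *= p ** godel(child)
    return g
```

For instance, the number of children of the root is recoverable as the number of distinct prime factors, and decoding is possible by factorizing and recursing on the exponents.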

Keywords: Data structures, Trees, Gödelization

ACM Classification Keywords: E.1 Data Structures - Trees

Link:

REPRESENTING TREE STRUCTURES BY NATURAL NUMBERS

Carmen Luengo, Luis Fernández, Fernando Arroyo

http://foibg.com/ijita/vol17/ijita17-1-p02.pdf

ON STRUCTURAL RECOGNITION WITH LOGIC AND DISCRETE ANALYSIS
By: Levon Aslanyan, Hasmik Sahakyan

Abstract: The paper addresses a special style of structuring the learning set in the pattern recognition area. Beyond the regular means of ranking objects and properties, which also use the structure of the learning set, the logic separation hypothesis is treated over the multi-valued feature area; it structures the learning set and tries to recover more valuable relations for better recognition. Algorithmically, the model is equivalent to constructing the reduced disjunctive normal form of Boolean functions. The multi-valued case considered is at least as hard as the binary case, but it uses approximately the same structures.

Keywords: Learning, Boolean function, logic separation.

ACM Classification Keywords: F.2.2 Nonnumerical Algorithms and Problems: Computations on discrete structures

Link:

ON STRUCTURAL RECOGNITION WITH LOGIC AND DISCRETE ANALYSIS

Levon Aslanyan, Hasmik Sahakyan

http://foibg.com/ijita/vol17/ijita17-1-p01.pdf

METHODS OF COMPARATIVE ANALYSIS OF BANKS FUNCTIONING: CLASSIC AND NEW APPROACHES
By: Alexander Kuzemin, Vyacheslav Lyashenko

Abstract: General aspects of carrying out a comparative analysis of the functioning and development of banks are considered. The classical interpretation of assessing the efficiency of bank management from the point of view of the interrelation of its liquidity and profitability is considered. Questions of existential dynamics in a system of comparative analysis of complex economic processes and objects are generalised.

Keywords: bank, analysis, microsituation, statistical conclusion, nonlinear dynamics, Wilcoxon criterion.

ACM Classification Keywords: H.4.2. Information system Applications: Types of Systems Decision Support

Link:

METHODS OF COMPARATIVE ANALYSIS OF BANKS FUNCTIONING: CLASSIC AND NEW APPROACHES

Alexander Kuzemin, Vyacheslav Lyashenko

http://foibg.com/ijita/vol16/IJITA16-4-p07.pdf

EXTENDED ALGORITHM FOR TRANSLATION OF MSC-DIAGRAMS INTO PETRI NETS
By: Sergiy Kryvyy, Oleksiy Chugayenko

Abstract: The article presents an algorithm for translating a system described by an MSC document into an ordinary Petri net modulo strong bisimulation. Only the static properties of the MSC document are explored: condition values are ignored (guarding conditions are considered always true) and all loop boundaries are interpreted as <1,inf>. The obtained Petri net can later be used for determining various system properties, both static and dynamic (e.g., liveness, boundedness, fairness, trap and mutual exclusion detection). The net retains forth-and-back traceability with the original MSC document, so detected errors can easily be traced back to the original system. The presented algorithm is implemented as a working prototype and can be used for automatic or semi-automatic detection of system properties. The algorithm can be used for early defect detection in telecommunication, software and middleware development. The article contains an example of using the algorithm to correct an error in a producer-consumer system.
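The dynamic properties listed (liveness, boundedness, etc.) are all defined over the Petri net firing rule that the translation targets: a transition is enabled when its input places hold enough tokens, and firing moves tokens from the pre-set to the post-set. A minimal sketch, with a toy step loosely inspired by the producer-consumer example; the place and transition names are invented:

```python
def enabled(marking, t):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in t["pre"].items())

def fire(marking, t):
    """Fire an enabled transition: consume pre-set tokens, produce post-set tokens."""
    m = dict(marking)
    for p, n in t["pre"].items():
        m[p] -= n
    for p, n in t["post"].items():
        m[p] = m.get(p, 0) + n
    return m

# Toy net: a producer puts an item in the buffer, a consumer removes it.
produce = {"pre": {"idle": 1}, "post": {"idle": 1, "buffer": 1}}
consume = {"pre": {"buffer": 1}, "post": {"consumed": 1}}
```

Model checking then amounts to exploring the markings reachable through this rule, which is where properties such as boundedness and mutual exclusion are decided.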

Keywords: MSC, Petri Net, model checking, verification, RAD.

ACM Classification Keywords: D.2.4 Software/Program Verification - Formal methods, Model checking

Link:

EXTENDED ALGORITHM FOR TRANSLATION OF MSC-DIAGRAMS INTO PETRI NETS

Sergiy Kryvyy, Oleksiy Chugayenko

http://foibg.com/ijita/vol16/IJITA16-4-p06.pdf

COGNITIVE APPROACH IN CASTINGS’ QUALITY CONTROL
By: Polyakova et al.

Abstract: Production volume of castings grows every year; the production volume of non-ferrous metals grows especially, thanks to aluminium. As a result, requirements for casting quality also increase. Foundry men from all over the world put all their efforts into managing the problem of casting defects. The authors suggest using a cognitive approach to modeling and simulation. The cognitive approach gives a unique opportunity to bind all the discovered factors into a single cognitive model and work with them jointly and simultaneously. The method of cognitive modeling (simulation) should provide foundry industry experts a comprehensive instrument that will help them to solve complex problems such as: predicting the probability of defect occurrence; visualizing the process of defect formation (by using a cognitive map); and investigating and analyzing direct or indirect “cause-and-effect” relations. The cognitive models mentioned comprise a diverse network of factors and their relations, which together thoroughly describe all the details of the foundry process and their influence on the appearance of casting defects and other aspects. Moreover, the article contains an example of a simple die casting model and the results of simulation. Implementation of the proposed method will help foundry men reveal the mechanism and the main reasons for casting defect formation.

Keywords: castings quality management, casting defects, expert systems, computer diagnostics, cognitive model, modeling, simulation.

ACM Classification Keywords: I.6.5 Computing Methodologies - Simulation and Modelling Model Development - Modeling methodologies

Link:

COGNITIVE APPROACH IN CASTINGS’ QUALITY CONTROL

Irina Polyakova, Jürgen Bast, Valeriy Kamaev, Natalia Kudashova, Andrey Tikhonin

http://foibg.com/ijita/vol16/IJITA16-4-p05.pdf

PRESENTATION OF ONTOLOGIES AND OPERATIONS ON ONTOLOGIES IN FINITE-STATE ...
By: Sergii Kryvyi, Oleksandr Khodzinskyi

Abstract: A representation of ontologies by finite-state machines is considered. This representation allows introducing operations on ontologies by using the regular algebra of languages. The operations on ontologies allow automating the process of analysis and synthesis of ontologies and their component parts.
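When an ontology fragment is represented as a finite-state machine, operations on ontologies inherit the regular operations on languages. A sketch of the product construction for intersection, with two tiny hand-made DFAs (strings over {a, b} with an even number of a's, and strings ending in b); the automata are illustrative, not taken from the paper:

```python
def dfa_accepts(dfa, s):
    """Run a DFA given as (transition dict, start state, accepting states)."""
    trans, start, accepts = dfa
    q = start
    for ch in s:
        q = trans.get((q, ch))
        if q is None:
            return False
    return q in accepts

def dfa_intersection(d1, d2):
    """Product construction: accepts exactly the strings both DFAs accept."""
    t1, s1, a1 = d1
    t2, s2, a2 = d2
    trans = {}
    for (q1, ch), r1 in t1.items():
        for (q2, ch2), r2 in t2.items():
            if ch == ch2:
                trans[((q1, q2), ch)] = (r1, r2)
    return trans, (s1, s2), {(x, y) for x in a1 for y in a2}

EVEN_A = ({("e", "a"): "o", ("e", "b"): "e",
           ("o", "a"): "e", ("o", "b"): "o"}, "e", {"e"})
ENDS_B = ({("0", "a"): "0", ("0", "b"): "1",
           ("1", "a"): "0", ("1", "b"): "1"}, "0", {"1"})
```

Union and complement follow the same pattern (union accepts when either component does; complement swaps accepting and non-accepting states), which is what makes analysis and synthesis of ontology components mechanizable.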

Keywords: ontology, operations, finite automata.

ACM Classification Keywords: I.2.4 Knowledge Representation Formalisms and Methods; F.4.1 Finite Automata

Link:

PRESENTATION OF ONTOLOGIES AND OPERATIONS ON ONTOLOGIES IN FINITE-STATE MACHINES THEORY

Sergii Kryvyi, Oleksandr Khodzinskyi

http://foibg.com/ijita/vol16/IJITA16-4-p04.pdf

WEBLOG CLUSTERING IN MULTILINEAR ALGEBRA PERSPECTIVE
By: Andri Mirzal

Abstract: This paper describes a clustering method for labeled link networks (semantic graphs) that can be used to group important nodes (highly connected nodes) along with their relevant link labels, using a technique borrowed from multilinear algebra known as PARAFAC tensor decomposition. In this kind of network, the adjacency matrix cannot fully describe the network structure. We have to expand the matrix into a 3-way adjacency tensor, so that it captures not only which nodes a node connects to but also through which link labels. By applying the PARAFAC decomposition, we get two lists, nodes and link labels, with scores attached to them for each decomposition group; the clustering process to find the important nodes along with their relevant labels can then be done simply by sorting the lists in decreasing order. To test the method, we construct a labeled link network from a blog dataset, where the blogs are the nodes and the labeled links are the shared words among them. The similarity measures between the results and standard measures look promising, about 0.87 for the two most important tasks: finding the most relevant words to a blog query and finding the most similar blogs to a blog query.

Keywords: Blogs, Clustering Method, Labeled-link Network, PARAFAC Decomposition.

ACM Classification Keywords: I.7.1 Document management

Link:

WEBLOG CLUSTERING IN MULTILINEAR ALGEBRA PERSPECTIVE

Andri Mirzal

http://foibg.com/ijita/vol16/IJITA16-4-p03.pdf

PARALLELIZATION METHODS OF LOGICAL INFERENCE FOR CONFLUENT RULE-BASED SYSTEM
By: Irene Artemieva, Michael Tyutyunnik

Abstract: The article describes research aimed at working out a program system for multiprocessor computers. The system is based on a confluent declarative production system. The article defines some schemes of parallel logical inference and the conditions affecting scheme choice. The conditions include properties of the program information graph and relations between data objects, as well as data structures and input data.

Keywords: logical Inference, parallel rule-based systems

ACM Classification Keywords: D.3.2 – Constraint and logic languages, I.2.5 Expert system tools and techniques.

Link:

PARALLELIZATION METHODS OF LOGICAL INFERENCE FOR CONFLUENT RULE-BASED SYSTEM

Irene Artemieva, Michael Tyutyunnik

http://foibg.com/ijita/vol16/IJITA16-4-p02.pdf

CLASSIFICATION OF HEURISTIC METHODS IN COMBINATORIAL OPTIMIZATION
By: Sergii Sirenko

Abstract: The field of combinatorial optimization is important for the scientific as well as the industrial world. Combinatorial optimization problems arise in many areas of computer science and other disciplines in which computational methods are applied, such as artificial intelligence, operations research, bioinformatics and electronic commerce. Many combinatorial optimization problems are NP-hard, and in this field heuristics are often the only way to solve a problem efficiently, despite the fact that heuristics represent a class of methods for which, in general, there is no formal theoretical justification of their performance. Many heuristic methods possessing different qualities and characteristics have been introduced for combinatorial optimization problems. One approach to the description and analysis of these methods is classification. In the paper, a number of characteristics by which heuristics for solving combinatorial optimization problems can be classified are proposed. The suggested classification is an extension of previous work in the area. This work generalizes existing approaches to the classification of heuristics and provides formal definitions for the algorithm characteristics on which the classes are based. The classification describes heuristic methods from different viewpoints. Among the main aspects considered are the decision-making approach, structural complexity, solution spaces utilized, presence of memory, trajectory continuity, search landscape modification, and presence of adaptation.

Keywords: combinatorial optimization, classification of methods, heuristics, metaheuristics.

ACM Classification Keywords: G.1.6 Numerical Analysis Optimization, I.2.8 Artificial Intelligence: Problem Solving, Control Methods, and Search – Heuristic methods, General Terms: Algorithms.

Link:

CLASSIFICATION OF HEURISTIC METHODS IN COMBINATORIAL OPTIMIZATION

Sergii Sirenko

http://foibg.com/ijita/vol16/IJITA16-4-p01.pdf

A GENETIC AND MEMETIC ALGORITHM FOR SOLVING THE UNIVERSITY COURSE TIMETABLE ...
By: Velin Kralev

Abstract: In this paper, genetic and memetic algorithms are presented as an approach to solving combinatorial optimization problems. The key terms associated with these algorithms, such as representation, coding and evaluation of solutions, genetic operators for crossover, mutation and reproduction, stopping criteria and others, are described. Two developed algorithms (genetic and memetic), with a defined computational complexity for each of them, are presented. These algorithms are used in solving the university course timetable problem. The methodology and the object of study are presented. The main objectives of the planned experiments are formulated, and the conditions for conducting the experiments are specified. The developed prototype and its functionality are briefly presented. The results are analyzed and appropriate conclusions are formulated. Future directions of work are presented.
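The genetic/memetic distinction in the abstract is that a memetic algorithm refines each offspring with local search before it joins the population. A toy sketch on an invented 6-event, 3-slot conflict graph; neither the instance nor the operators are the paper's:

```python
import random

# Toy timetable: assign each of 6 events a slot in {0, 1, 2}; conflicting
# pairs (hypothetical shared students) should not share a slot.
CONFLICTS = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
SLOTS, EVENTS = 3, 6

def clashes(tt):
    return sum(tt[a] == tt[b] for a, b in CONFLICTS)

def local_search(tt):
    """Memetic ingredient: greedily move each event to its best slot."""
    tt = list(tt)
    for e in range(EVENTS):
        tt[e] = min(range(SLOTS),
                    key=lambda s: clashes(tt[:e] + [s] + tt[e + 1:]))
    return tt

def memetic(pop_size=20, gens=30, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randrange(SLOTS) for _ in range(EVENTS)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=clashes)
        parents = pop[:pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, EVENTS)         # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                 # mutation
                child[rng.randrange(EVENTS)] = rng.randrange(SLOTS)
            children.append(local_search(child))   # the memetic step
        pop = parents + children
    return min(pop, key=clashes)
```

Dropping the local_search call turns this back into a plain genetic algorithm, which is exactly the comparison the paper's experiments are set up to make, on a far larger problem.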

Keywords: genetic algorithm, memetic algorithm, university course timetable problem.

Link:

A GENETIC AND MEMETIC ALGORITHM FOR SOLVING THE UNIVERSITY COURSE TIMETABLE PROBLEM

Velin Kralev

http://foibg.com/ijita/vol16/IJITA16-3-p08.pdf

ANALOGICAL MAPPING USING SIMILARITY OF BINARY DISTRIBUTED REPRESENTATIONS
By: Serge V. Slipchenko, Dmitri A. Rachkovskij

Abstract: We develop an approach to analogical reasoning with hierarchically structured descriptions of episodes and situations based on a particular form of vector representations: structure-sensitive sparse binary distributed representations known as code-vectors. We propose distributed representations of analog elements that allow finding correspondences between the elements for implementing analogical mapping, as well as analogical inference, based on the similarity of those representations. The proposed methods are investigated using test analogs, and the obtained results are comparable to those of known mature analogy models. However, exploiting the similarity properties of distributed representations provides better scaling and enhances the semantic basis of analogs and their elements, as well as neurobiological plausibility. The paper also provides a brief survey of analogical reasoning, its models, and the representations employed in those models.

Keywords: analogy, analogical mapping, analogical inference, distributed representation, code-vector, reasoning, knowledge bases.

ACM Classification Keywords: I.2 ARTIFICIAL INTELLIGENCE, I.2.4 Knowledge Representation Formalisms and Methods, I.2.6 Learning (Analogies)

Link:

ANALOGICAL MAPPING USING SIMILARITY OF BINARY DISTRIBUTED REPRESENTATIONS

Serge V. Slipchenko, Dmitri A. Rachkovskij

http://foibg.com/ijita/vol16/IJITA16-3-p07.pdf

DISTANCE MATRIX APPROACH TO CONTENT IMAGE RETRIEVAL
By: Kinoshenko et al.

Abstract: As the volume of image data and the need to use it in various applications have grown significantly in recent years, retrieval efficiency and effectiveness have become a necessity. Unfortunately, existing indexing methods are not applicable to a wide range of problem-oriented fields due to their operating time limitations and strong dependency on the traditional descriptors extracted from the image. To meet higher requirements, a novel distance-based indexing method for region-based image retrieval has been proposed and investigated. The method creates premises for considering embedded partitions of images, carrying out the search at different refinement or roughening levels and so seeking the image’s meaningful content.
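Distance-based indexing of this kind relies on the triangle inequality: with precomputed distances from every object to a reference object (a pivot), |d(q,p) − d(o,p)| lower-bounds d(q,o), so most candidates can be rejected without ever computing d(q,o). A one-pivot sketch on 1-D points standing in for image descriptors; the data in the test are arbitrary:

```python
def range_query(query, objects, pivot, dist, radius):
    """Return objects within `radius` of `query`, counting how many exact
    distance evaluations the pivot-based lower bound could not avoid."""
    dqp = dist(query, pivot)
    hits, evaluated = [], 0
    for o in objects:
        # dist(o, pivot) would be read from the precomputed distance matrix
        if abs(dqp - dist(o, pivot)) > radius:
            continue                  # triangle inequality: o cannot qualify
        evaluated += 1
        if dist(query, o) <= radius:
            hits.append(o)
    return hits, evaluated
```

The saving grows with the cost of the metric, which for region-based image comparison is exactly where it matters.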

Keywords: content image retrieval, distance matrix, indexing.

ACM Classification Keywords: H.3.3 Information Search and Retrieval: Search process

Link:

DISTANCE MATRIX APPROACH TO CONTENT IMAGE RETRIEVAL

Dmitry Kinoshenko, Vladimir Mashtalir, Elena Yegorova

http://foibg.com/ijita/vol16/IJITA16-3-p06.pdf

THE CASCADE NEO-FUZZY ARCHITECTURE USING CUBIC-SPLINE ACTIVATION FUNCTIONS
By: Yevgeniy Bodyanskiy, Yevgen Viktorov

Abstract: In the paper, a new hybrid system of computational intelligence called the Cascade Neo-Fuzzy Neural Network (CNFNN) is introduced. This architecture has a structure similar to the Cascade-Correlation Learning Architecture proposed by S. E. Fahlman and C. Lebiere, but differs from it in the type of artificial neurons. The CNFNN contains neo-fuzzy neurons, which can be adjusted using high-speed linear learning procedures. The proposed CNFNN is characterized by a high learning rate and a small required learning sample, and its operation can be described by fuzzy linguistic “if-then” rules, providing “transparency” of the obtained results compared with conventional neural networks. Using cubic-spline membership functions instead of conventional triangular functions increases the accuracy of approximating smooth functions.
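A neo-fuzzy neuron sums nonlinear synapses f_i(x_i) = Σ_j w_ij μ_ij(x_i); because the output is linear in the weights, gradient descent reduces to an LMS rule, which is the "high-speed linear learning" the abstract mentions. A one-input sketch with the conventional triangular functions (the paper's variant replaces them with cubic splines); the partition and training data are invented:

```python
def tri_mfs(x, centers):
    """Triangular membership degrees over a uniform partition; at most two
    are non-zero and they sum to 1 (a Ruspini partition)."""
    mu = [0.0] * len(centers)
    for j in range(len(centers) - 1):
        if centers[j] <= x <= centers[j + 1]:
            t = (x - centers[j]) / (centers[j + 1] - centers[j])
            mu[j], mu[j + 1] = 1 - t, t
    return mu

class NeoFuzzyNeuron:
    """One-input neo-fuzzy synapse: y = sum_j w_j * mu_j(x). Linear in w,
    so squared-error gradient descent is a simple LMS update."""
    def __init__(self, centers):
        self.centers = centers
        self.w = [0.0] * len(centers)

    def predict(self, x):
        return sum(w * m for w, m in zip(self.w, tri_mfs(x, self.centers)))

    def train(self, data, lr=0.5, epochs=200):
        for _ in range(epochs):
            for x, y in data:
                mu = tri_mfs(x, self.centers)
                err = y - sum(w * m for w, m in zip(self.w, mu))
                self.w = [w + lr * err * m for w, m in zip(self.w, mu)]
```

The cascade architecture stacks such neurons layer by layer in the Cascade-Correlation style; the sketch shows only the single neuron and its linear learning step.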

Keywords: artificial neural networks, constructive approach, fuzzy inference, hybrid systems, neo-fuzzy neuron, cubic-spline functions.

ACM Classification Keywords: I.2.6 Learning – Connectionism and neural nets.

Link:

THE CASCADE NEO-FUZZY ARCHITECTURE USING CUBIC-SPLINE ACTIVATION FUNCTIONS

Yevgeniy Bodyanskiy, Yevgen Viktorov

http://foibg.com/ijita/vol16/IJITA16-3-p05.pdf

TRAINED NEURAL NETWORK CHARACTERIZING VARIABLES FOR PREDICTING ...
By: Sotto et al.

Abstract: Many organic compounds that cause irreversible damage to human health and the ecosystem are present in water resources. Among these hazardous substances, phenolic compounds play an important role in current contamination. The use of membrane technology is increasing exponentially in drinking water production and waste water treatment. The removal of organic compounds by nanofiltration membranes is characterized not only by molecular sieving effects but also by membrane-solute interactions. The influence of the sieving parameters (molecular weight and molecular diameter) and the physicochemical interactions (dissociation constant and molecular hydrophobicity) on the membrane rejection of the organic solutes was studied. The molecular hydrophobicity is expressed as the logarithm of the octanol-water partition coefficient. This paper proposes a method that can be used for symbolic knowledge extraction from a trained neural network, once it has been trained to the desired performance; the method is based on detecting the most important variables in problems where multicollinearity exists among the input variables.

Keywords: Neural Networks, Radial Basis Functions, Nanofiltration, Membranes, Retention.

ACM Classification Keywords: K.3.2 Learning (Knowledge acquisition)

Link:

TRAINED NEURAL NETWORK CHARACTERIZING VARIABLES FOR PREDICTING ORGANIC RETENTION BY NANOFILTRATION MEMBRANES

Arcadio Sotto, Ana Martinez, Angel Castellanos

http://foibg.com/ijita/vol16/IJITA16-3-p04.pdf

EXTENDED NETWORKS OF EVOLUTIONARY PROCESSORS
By: Mingo et al.

Abstract: This paper presents an extended behavior of networks of evolutionary processors. Usually, such nets are able to solve NP-complete problems working with symbolic information. Information can evolve by applying rules and can be communicated through the net provided some constraints are verified. These nets are based on the biological behavior of membrane systems, but transformed into a suitable computational model, in which only symbolic information is communicated. This paper proposes communicating evolution rules as well as symbolic information. The idea arises from the DNA structure in living cells: DNA encodes both information and operations, and it can be sent to other cells. Extended nets can be considered a superset of networks of evolutionary processors, since permitting and forbidding constraints can be written in order to deny rule communication.

Keywords: Networks of Evolutionary Processors, Membrane Systems, and Natural Computation.

ACM Classification Keywords: F.1.2 Modes of Computation, I.6.1 Simulation Theory, H.1.1 Systems and Information Theory

Link:

EXTENDED NETWORKS OF EVOLUTIONARY PROCESSORS

Luis Fernando de Mingo, Nuria Gómez Blas, Francisco Gisbert, Miguel A. Peña

http://foibg.com/ijita/vol16/IJITA16-3-p03.pdf
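A toy sketch (not the authors' formalism) of the extension described above: in one communication step, evolution rules migrate between processors just like symbolic words, subject to permitting/forbidding filters. Processors are plain dicts, rules are encoded as strings, and filters are symbol sets; copies remain at the sender for simplicity.

```python
def passes(item, permitting, forbidding):
    # an item passes a filter if it contains every permitting symbol
    # and no forbidding symbol
    s = set(item)
    return permitting <= s and not (forbidding & s)

def communicate(processors, edges):
    """One communication step: words AND rules move along edges."""
    for a, b in edges:
        pa, pb = processors[a], processors[b]
        moving_words = {w for w in pa["words"]
                        if passes(w, pa["out_perm"], pa["out_forb"])
                        and passes(w, pb["in_perm"], pb["in_forb"])}
        # the extension: rules (here strings like "a>b") are filtered
        # and communicated exactly like words
        moving_rules = {r for r in pa["rules"]
                        if passes(r, pa["out_perm"], pa["out_forb"])
                        and passes(r, pb["in_perm"], pb["in_forb"])}
        pb["words"] |= moving_words
        pb["rules"] |= moving_rules

p0 = {"words": {"ab"}, "rules": {"a>b"},
      "out_perm": set(), "out_forb": set(),
      "in_perm": set(), "in_forb": set()}
p1 = {"words": set(), "rules": set(),
      "out_perm": set(), "out_forb": set(),
      "in_perm": set(), "in_forb": {"c"}}
procs = [p0, p1]
communicate(procs, [(0, 1)])
print(procs[1]["words"], procs[1]["rules"])
```

Setting a forbidding filter that matches rule strings but not words would deny rule communication, recovering the behavior of an ordinary network of evolutionary processors.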

FAST LINEAR ALGORITHM FOR ACTIVE RULES APPLICATION IN TRANSITION P SYSTEMS
By: Javier Gil et al.  (4074 reads)
Rating: (1.00/10)

Abstract: Transition P systems are computational models inspired by basic features of biological membranes and the observation of biochemical processes. In these models, membranes contain multisets of objects, which evolve according to given evolution rules. The unit on which computation is based, the cellular membrane, is the basic unit of structure and functioning of all living beings: the biological cell. These models, called P systems or membrane systems, arise from the need to find new forms of computation that exceed the limits set by complexity theory in conventional computing, drawing mainly on the distributed, non-deterministic, and massively parallel way in which cells behave. In the field of Transition P system implementation, the need has been identified to determine how long the application of active evolution rules in membranes will take. In addition, having time estimates for rule application makes it possible to take important decisions related to the design of hardware/software architectures. In this paper we propose a new evolution rule application algorithm oriented towards the implementation of Transition P systems. The developed algorithm is sequential and has linear complexity in the number of evolution rules. Moreover, it obtains smaller execution times than the preceding algorithms. The algorithm is therefore very appropriate for the implementation of Transition P systems on sequential devices.

Keywords: Natural Computing, Membrane computing, Transition P System, Rules Application Algorithms

ACM Classification Keywords: D.1.m Miscellaneous – Natural Computing

Link:

FAST LINEAR ALGORITHM FOR ACTIVE RULES APPLICATION IN TRANSITION P SYSTEMS

Francisco Javier Gil, Jorge Tejedor, Luis Fernández

http://foibg.com/ijita/vol16/IJITA16-3-p02.pdf
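To make the setting concrete, here is a hedged sketch of sequential rule application in a single membrane: one pass over the rules (hence linear in their number), applying each active rule the maximum number of times its left-hand multiset fits. This is a simplified deterministic variant for illustration; the paper's algorithm and its handling of non-deterministic choice are not reproduced here.

```python
from collections import Counter

def max_applications(objects, lhs):
    # how many times can multiset `lhs` be subtracted from `objects`?
    return min(objects[sym] // n for sym, n in lhs.items())

def apply_rules(objects, rules):
    """One evolution step: a single pass over the rule list."""
    produced = Counter()
    for lhs, rhs in rules:            # O(number of rules) rule visits
        k = max_applications(objects, lhs)
        if k == 0:
            continue                  # rule is not active
        for sym, n in lhs.items():
            objects[sym] -= n * k     # consume reactants now
        for sym, n in rhs.items():
            produced[sym] += n * k    # products appear after the step
    objects.update(produced)          # Counter.update adds counts
    return objects

membrane = Counter({"a": 5, "b": 3})
rules = [(Counter({"a": 2}), Counter({"b": 1})),   # 2a -> b
         (Counter({"b": 1}), Counter({"c": 2}))]   # b -> 2c
print(apply_rules(membrane, rules))
```

Deferring the products until the end of the pass models the usual maximally-parallel semantics, in which objects produced in a step cannot be consumed within the same step.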

GENE CODIFICATION FOR NOVEL DNA COMPUTING PROCEDURES
By: Goni Moreno et al.  (5425 reads)
Rating: (1.00/10)

Abstract: The aim of the paper is to show how suitable codification of genes can help in the correct resolution of a problem using DNA computations. Genes are the input data of the problem to solve, so the first task to carry out is the definition of the genes in order to perform a complete computation in the best way possible. In this paper we propose a model for encoding data into DNA strands so that this data can be used in the simulation of a genetic algorithm based on molecular operations. The first problem when trying to apply an algorithm in DNA computing is how to codify the data that the algorithm will use. Precisely, the gene formation presented in this paper allows us to join the codification and evaluation steps in one single stage. Furthermore, these genes turn out to be stable in a DNA soup because we use bond-free languages in their definition. Previous work on DNA coding defined bond-free languages with several properties assuring the stability of any DNA word of such a language. We prove that a bond-free language is necessary but not sufficient to codify a gene with the correct codification; this is due to the fact that selection must be done based on a concrete gene characterization. This characterization can be developed in many different ways by codifying what we call the fitness field of the gene. It is shown how to use several DNA computing procedures based on genes, from single- and double-stranded molecules to more complex DNA structures like plasmids.

Keywords: DNA Computing, Bond-Free Languages, Genetic Algorithms, Gene Computing.

ACM Classification Keywords: I.6. Simulation and Modeling, B.7.1 Advanced Technologies, J.3 Biology and Genetics

Link:

GENE CODIFICATION FOR NOVEL DNA COMPUTING PROCEDURES

Angel Goni Moreno, Paula Cordero, Juan Castellanos

http://foibg.com/ijita/vol16/IJITA16-3-p01.pdf
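One simplified bond-freeness check can illustrate the stability property mentioned above: reject a candidate codeword set if the Watson-Crick reverse complement of any word occurs as a substring of any word (including itself), since such pairs could hybridize in the soup. This is one illustrative condition, not the paper's full characterization of bond-free languages.

```python
# Watson-Crick base complements
COMP = {"A": "T", "T": "A", "C": "G", "G": "C"}

def rev_comp(word):
    """Reverse complement of a DNA word, e.g. ACG -> CGT."""
    return "".join(COMP[b] for b in reversed(word))

def is_bond_free(words):
    """Reject the set if any word's reverse complement is a substring
    of any word in the set (a potential unwanted hybridization)."""
    for w in words:
        rc = rev_comp(w)
        if any(rc in v for v in words):
            return False
    return True

print(is_bond_free(["ACCA", "CACC"]))  # True: no cross-hybridization
print(is_bond_free(["ACGT"]))          # False: ACGT is its own reverse complement
```

As the abstract notes, a check like this is necessary but not sufficient for correct gene codification: the words must additionally carry the gene characterization (the fitness field) required by the selection step.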
