|
OPTIMISATION OF ROUTE-PLANNING UNDER INDEFINITE RISK CONDITIONS
By: Kuzemin Oleksandr, Berezhnoy Sergey, Dayub Yasir
(3260 reads)
Rating: (1.00/10)
|
Abstract: This paper describes a data-transformation algorithm intended to support the decision maker. The aim of the paper is to develop a multi-purpose algorithm for building sets of optimal routes that takes into consideration most of the real-world factors that provoke risk. A simple and effective method of multicriteria optimisation is proposed and developed.
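The authors' algorithm is not reproduced in the abstract; purely as an illustration of the general idea of multicriteria route scoring, the following minimal Python sketch ranks hypothetical candidate routes by keeping the Pareto-optimal ones and ordering them by a weighted sum of the criteria (criteria names, values and weights are invented for the example, not taken from the paper).

    # Illustrative sketch only; criteria and weights are hypothetical.
    def dominated(a, b):
        """True if cost vector a is dominated by b (b no worse on all criteria, strictly better on one)."""
        return all(y <= x for x, y in zip(a, b)) and any(y < x for x, y in zip(a, b))

    def rank_routes(routes, weights):
        """routes: {name: (road_risk, weather_risk, danger_proximity, length_km)} -- lower is better."""
        # Keep Pareto-optimal routes, then order them by a weighted sum of the criteria.
        pareto = {n: c for n, c in routes.items()
                  if not any(dominated(c, o) for m, o in routes.items() if m != n)}
        return sorted(pareto, key=lambda n: sum(w * x for w, x in zip(weights, pareto[n])))

    candidates = {"A": (0.2, 0.5, 0.1, 42.0), "B": (0.4, 0.2, 0.3, 38.0), "C": (0.5, 0.6, 0.4, 40.0)}
    print(rank_routes(candidates, weights=(0.4, 0.3, 0.2, 0.1)))  # -> ['B', 'A']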
Keywords: emergency situations, microsituations, road conditions, weather conditions, objects of high danger,
multicriteria optimisation.
ACM Classification Keywords: H.1 Models and Principles – General
Link:
OPTIMISATION OF ROUTE-PLANNING UNDER INDEFINITE RISK CONDITIONS
Kuzemin Oleksandr, Berezhnoy Sergey, Dayub Yasir
http://www.foibg.com/ijitk/ijitk-vol04/ijitk04-3-p08.pdf
|
APPLICATION OF MATHEMATICAL INDUCTION FOR INHERITANCE LAW INTERPRETATIONS
By: Assen Tochev, Vassil Guliashki
(4013 reads)
Rating: (1.00/10)
|
Abstract: The purpose of this article is to obtain a simple rule for applying the inheritance law in the case of full brothers/sisters (siblings by both parents) and/or half-brothers/sisters on the mother's or the father's side. Using mathematical induction, a result is obtained for n full brothers/sisters and m half-brothers/sisters.
Keywords: Inheritance law, mathematical induction.
ACM Classification Keywords: A.0 General Literature - Conference proceedings; I. Computing methodologies,
I.2. Artificial Intelligence, I.2.1. Applications and expert systems, Subject descriptor: Law; H. Information systems,
H4. Information systems application, H.4.2. Types of systems, Subject descriptor: Decision support;
Link:
APPLICATION OF MATHEMATICAL INDUCTION FOR
INHERITANCE LAW INTERPRETATIONS
Assen Tochev, Vassil Guliashki
http://www.foibg.com/ijitk/ijitk-vol04/ijitk04-3-p07.pdf
|
MODEL RESEARCH OF INTERACTION PROCESSES OF TEXT SPACES
By: Konstantin I. Belousov, Tatyana N. Galinskaya
(3067 reads)
Rating: (1.00/10)
|
Abstract: The article discusses the problem of the interaction of text spaces. When discussing the interaction of text spaces we assume that there exists a certain text model. The technique of semantic charting and the method of positional analysis allowed us to represent the successive-simultaneous semantic space of a text as its “semantic outline”. Owing to the method of prosodic analysis of a text, aimed at modeling its prosodic outline, it becomes possible to analyze the cooperative interactions of these relatively independent text spaces. The research program presented in this work, based on the system-activity approach, is aimed at studying the text as a polyontological, self-organizing spatiotemporal linguistic object. The multi-aspect text analysis is grounded in a) the positional analysis method and b) quantitative methods, which in their turn comprise c) correlation methods that determine the level of the text aspects. By synchronically comparing and contrasting the semantic connection intensity and the mean sound intensity of the obtained data, we received results that allow us to be more specific in discussing the text structure as an evolving process. The search for explanatory tools for the convergence, divergence, intersection and overlapping of various text structures is the key to understanding the complex material, ideal and social nature of the text and its presentation as a whole.
Keywords: system activity approach, modeling, positional analysis, semantic charts, semantic graph of a text.
Link:
MODEL RESEARCH OF INTERACTION PROCESSES OF TEXT SPACES
Konstantin I. Belousov, Tatyana N. Galinskaya
http://www.foibg.com/ijitk/ijitk-vol04/ijitk04-3-p06.pdf
|
THE EXPERIENCE OF DEVELOPING SOFTWARE FOR TYPOLOGICAL DATABASES ...
By: Vladimir Polyakov
(3526 reads)
Rating: (1.00/10)
|
Abstract: In the present article we discuss the experience of creating software for the typological database “Languages of the World”. The DB “Languages of the World” is one of the biggest typological computer resources. We review the software connected with the DB “Languages of the World”. The following questions are discussed: compatibility of versions, choice of the best data structure, development of the content in newer versions of the DB, creation of a bilingual version, and correct citing. The main lessons learnt from the project by the workgroup are:
The long development and creation of different versions of the product during its life cycle (over 20 years), keeping it viable against the background of changing operating systems and programming paradigms, makes us think seriously about a technology for ensuring compatibility between different versions of the product, documenting the code, and retaining the key participants of the workgroup.
The structure of the DB is secondary in relation to the content. In the end, the choice of a certain structure of data presentation in a certain realization of the product is a question of programming convenience. Besides, the choice of data structure is in many situations defined by the data storage environment and by the timeline and budget of the product.
Planning a long life cycle for a linguistic resource intended for scientific purposes must foresee tools for recording and archiving the inevitable changes of the content. The lack of such tools, or links to the content without invariant binding, lowers the quality and value of the obtained scientific results.
The creation of the bilingual version of the product demanded thorough elaboration of the terminological part of the DB, as well as linkage of the languages to the international coding system. At the same time, the specificity of the Russian linguistic school and the more detailed description of the languages of Eurasia in the DB “Languages of the World” did not allow us to resolve these contradictions completely.
The main scientific results obtained over the past five years with the use of the DB are enumerated, and the prospects for its future development and use are considered.
Keywords: language typology, linguistic database
Link:
THE EXPERIENCE OF DEVELOPING SOFTWARE FOR TYPOLOGICAL DATABASES
(ON THE EXAMPLE OF DB “LANGUAGES OF THE WORLD”)
Vladimir Polyakov
http://www.foibg.com/ijitk/ijitk-vol04/ijitk04-3-p05.pdf
|
ANALYZING THE LOCALIZATION OF LANGUAGE FEATURES WITH COMPLEX SYSTEMS TOOLS ...
By: Samuel F. Omlin
(3687 reads)
Rating: (1.00/10)
|
Abstract: Half of the world’s languages are in danger of disappearing before the century ends. Efficient
protection of these languages is difficult as their fate depends on multiple factors. The role played by the
geographic situation of a language in its survival is still unclear. The following quantitative study focused on the
relation between the ‘vitality’ of a minority language and the linguistic structure of the neighboring languages. A
large sample of languages in Eurasia was considered. The languages were described based on a complex
system of typological features. The spatial distribution of the language features in the sample area was measured
by quantifying deviations from purely random configurations. Interactions between the linguistic features were revealed. The obtained interaction network made it possible to define a location “quality” index for a language's localization. This index was related to the corresponding vitality estimates from UNESCO, and a significant relation could be established between the two variables. The degree of endangerment of the minority languages studied does indeed appear to be related to the linguistic structure of their neighboring languages. Beyond the particular
context of endangered languages, the proposed approach constitutes a promising tool to gain more knowledge
about the mechanisms that control the geographical distribution of linguistic features.
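The abstract does not give the actual measure used; as a rough sketch of the underlying idea (quantifying how far the spatial distribution of a typological feature departs from a purely random configuration), one can compare an observed neighbour-agreement count with a permutation baseline. The feature values and neighbourhood graph below are invented toy data.

    import random

    def neighbour_agreement(values, edges):
        """Count neighbouring language pairs sharing the same feature value."""
        return sum(values[i] == values[j] for i, j in edges)

    def deviation_from_random(values, edges, trials=1000, seed=0):
        """Z-like score: observed agreement vs. agreement under random shuffles of the values."""
        rng = random.Random(seed)
        observed = neighbour_agreement(values, edges)
        sims = []
        for _ in range(trials):
            shuffled = values[:]
            rng.shuffle(shuffled)
            sims.append(neighbour_agreement(shuffled, edges))
        mean = sum(sims) / trials
        var = sum((s - mean) ** 2 for s in sims) / trials
        return (observed - mean) / (var ** 0.5 or 1.0)

    # Hypothetical toy data: 6 languages, one binary feature, neighbourhood edges.
    feature = [1, 1, 1, 0, 0, 0]
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
    print(deviation_from_random(feature, edges))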
Keywords: Language competition, Complex systems, Interactions, Spatial distribution, Typological language
features.
ACM Classification Keywords: I.m Miscellaneous; J.5 Arts and Humanities – Linguistics; H.2.8 Database
Applications – Data mining, Scientific databases, Spatial databases and GIS.
Link:
ANALYZING THE LOCALIZATION OF LANGUAGE FEATURES WITH COMPLEX
SYSTEMS TOOLS AND PREDICTING LANGUAGE VITALITY
Samuel F. Omlin
http://www.foibg.com/ijitk/ijitk-vol04/ijitk04-3-p04.pdf
|
COMPARATIVE ANALYSIS OF PHYLOGENIC ALGORITHMS
By: Valery Solovyev, Renat Faskhutdinov
(3101 reads)
Rating: (1.00/10)
|
Abstract: The paper is dedicated to a comparative analysis of phylogenetic algorithms used for linguistic tasks. At present there are many phylogenetic algorithms; however, there is no unanimous opinion on which of them should be used. The paper suggests a model of language evolution trees and introduces a parameter to characterize the topology of trees. The comparison of the main algorithms is made on trees of various topology. The paper shows that the UPGMA algorithm gives better results on trees close to balanced ones. This provides an explanation for a number of contradictory results described in published works.
The problem of input data choice and the relation between the results and the number and type of parameters is also considered. The results obtained here are likewise ambiguous. The typological databases “Jazyki mira” and WALS, as well as the method of computer modeling, are used in the paper.
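As a small illustration of one of the algorithms compared above, UPGMA (average-linkage clustering over a matrix of inter-language distances), the following sketch uses SciPy; the distance matrix and language labels are invented for the example and do not come from the paper.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, to_tree
    from scipy.spatial.distance import squareform

    # Hypothetical pairwise distances between four languages (symmetric, zero diagonal).
    langs = ["L1", "L2", "L3", "L4"]
    D = np.array([[0.0, 0.2, 0.6, 0.7],
                  [0.2, 0.0, 0.5, 0.6],
                  [0.6, 0.5, 0.0, 0.3],
                  [0.7, 0.6, 0.3, 0.0]])

    # UPGMA is average linkage applied to a condensed distance matrix.
    Z = linkage(squareform(D), method="average")

    def newick(node, labels):
        """Render the resulting binary tree in Newick format."""
        if node.is_leaf():
            return labels[node.id]
        return "(%s,%s)" % (newick(node.get_left(), labels), newick(node.get_right(), labels))

    print(newick(to_tree(Z), langs) + ";")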
Keywords: language evolution, phylogenetic algorithms
Link:
COMPARATIVE ANALYSIS OF PHYLOGENIC ALGORITHMS
Valery Solovyev, Renat Faskhutdinov
http://www.foibg.com/ijitk/ijitk-vol04/ijitk04-3-p03.pdf
|
SPAM AND PHISHING DETECTION IN VARIOUS LANGUAGES
By: Liana Ermakova
(4033 reads)
Rating: (1.00/10)
|
Abstract: The majority of existing spam filtering techniques suffer from several serious disadvantages. Some of them produce many false positives. Others are suitable only for email filtering and cannot be used in instant messaging and social networks. Therefore, content-based methods seem to be more efficient. One of them is based on signature retrieval; however, it is not resistant to message changes. There are enhancements (e.g. checksums), but they are extremely time- and resource-consuming. That is why the main objective of this research is to develop a method for detecting transformed messages. To this end we have compared spam in various languages, namely English, French, Russian and Italian. For each language about 1000 messages, including spam and non-spam, were examined. 135 quantitative features have been retrieved. Almost all of these features are language-independent. They underlie the first step of the algorithm, which is based on a support vector machine. The next stage is to test the obtained results by applying a trigram approach. The proposed phishing detection technique is also based on an SVM. Quantitative characteristics, message structure and keywords are used as features. The results obtained indicate the efficiency of the suggested approach.
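The paper's 135 features are not listed in the abstract; the sketch below is only a rough illustration of the kind of pipeline described (language-independent quantitative features fed to a support vector machine), using a handful of invented features and toy messages with scikit-learn.

    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    def quantitative_features(message):
        """Toy language-independent features: length, digit share, uppercase share, URL count."""
        n = max(len(message), 1)
        return [len(message),
                sum(c.isdigit() for c in message) / n,
                sum(c.isupper() for c in message) / n,
                message.lower().count("http")]

    # Hypothetical training data (1 = spam, 0 = legitimate).
    texts = ["WIN $$$ NOW http://spam.example", "Meeting moved to 15:00, see you there",
             "FREE PILLS!!! http://x.example http://y.example", "Привет, отчёт во вложении"]
    labels = [1, 0, 1, 0]

    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    model.fit([quantitative_features(t) for t in texts], labels)
    print(model.predict([quantitative_features("CHEAP WATCHES http://z.example")]))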
Keywords: spam, corpus linguistics, phishing, filtering, text categorization.
ACM Classification Keywords: I.2.7 Text analysis
Link:
SPAM AND PHISHING DETECTION IN VARIOUS LANGUAGES
Liana Ermakova
http://www.foibg.com/ijitk/ijitk-vol04/ijitk04-3-p02.pdf
|
GRAMMATICAL PRIMING DOES FACILITATE VISUAL WORD NAMING, AT LEAST IN SERBIAN
By: Dejan Lalović
(4341 reads)
Rating: (1.00/10)
|
Abstract: From the seminal work of the 1980s to more recent findings, the literature review suggests grammatical priming to be an elusive phenomenon, reliably obtained mostly in the lexical decision task and only rarely in the naming task. The prevalent conclusion drawn from this fact is that the effects of grammatical priming are of lesser importance for online word processing as reflected by naming. However, this goes against the intuitive notion that grammatical information is especially valuable in processing a richly inflected, free-word-order language such as Serbian. The conclusion was challenged in a naming task in which prepositions and personal pronouns were employed to prime target nouns and verbs. We also tested the effect of prime-target asynchrony at 600 ms and 250 ms intervals, as this variable is known to inversely influence the effects of language priming. A delayed naming condition was used to provide a purer estimate of the target processing time afforded at the two asynchrony intervals in online naming. The analyses suggest the effects of grammatical priming to be both substantial and robust. The facilitation of 22 ms (25 ms at the 600 ms asynchrony, 20 ms at the 250 ms asynchrony) provided by grammatical information was roughly twice as large as that obtained in comparable studies in English. The facilitation effect was not qualified by an interaction with SOA and therefore should not be attributed to some major strategic process associated with the longer SOA. We conclude that grammatical priming in naming is possible, at least in the case of a richly inflected, free-word-order language and with more than one word class primed. The difference between online and delayed average naming latencies indicated a slightly wider time window for target processing at the shorter asynchrony, a fact that calls for caution in interpreting the loci of grammatical priming effects.
Keywords: grammatical priming; word naming.
ACM Classification Keywords: I.2 Artificial Intelligence; I.2.7 Natural Language Processing – Language parsing
and understanding.
Link:
GRAMMATICAL PRIMING DOES FACILITATE VISUAL WORD NAMING,
AT LEAST IN SERBIAN
Dejan Lalović
http://www.foibg.com/ijitk/ijitk-vol04/ijitk04-3-p01.pdf
|
MULTILINGUAL REDUCED N-GRAM MODELS
By: Tran Thi Thu Van and Le Quan Ha
(3498 reads)
Rating: (1.00/10)
|
Abstract: Statistical language models should improve as the size of the n-grams increases from 3 to 5 or higher.
However, the number of parameters and calculations, and the storage requirement increase very rapidly if we
attempt to store all possible combinations of n-grams. To avoid these problems, the reduced n-gram approach previously developed by O'Boyle (1993) can be applied. A reduced n-gram language model can store an entire corpus's phrase-history length within feasible storage limits. Another theoretical advantage of reduced n-grams is that they are closer to being semantically complete than traditional models, which include all n-grams. In our experiments, the reduced n-gram Zipf curves are first presented and compared with conventional n-grams for Irish, Chinese and English. The reduced n-gram model is then applied to large Irish, Chinese and English corpora. For Irish, we reduce the model size, compared to the traditional 7-gram model, by a factor of 15.1 for a 7-million-word Irish corpus while obtaining a 41.63% improvement in perplexity; for English, we reduce the model sizes by factors of 14.6 for a 40-million-word corpus and 11.0 for a 500-million-word corpus while obtaining 5.8% and 4.2% perplexity improvements; and for Chinese, we gain a 16.9% perplexity reduction and reduce the model size by a factor larger than 11.2. This paper is a step towards the modeling of Irish, Chinese and English using semantically complete phrases in an n-gram model.
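As a small illustration of the Zipf curves mentioned in the abstract (not of the reduced n-gram model itself), the sketch below counts word n-grams in a toy corpus and produces the rank-frequency pairs that would normally be plotted on log-log axes.

    from collections import Counter

    def ngram_counts(tokens, n):
        """Frequency of each n-gram (as a tuple of tokens)."""
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    def zipf_curve(counts):
        """(rank, frequency) pairs sorted by descending frequency, for a log-log Zipf plot."""
        freqs = sorted(counts.values(), reverse=True)
        return list(enumerate(freqs, start=1))

    # Toy corpus; a real experiment would use millions of words.
    corpus = "the cat sat on the mat and the cat slept on the mat".split()
    for n in (1, 2, 3):
        curve = zipf_curve(ngram_counts(corpus, n))
        print(n, curve[:5])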
Keywords: Reduced n-grams, Overlapping n-grams, Weighted average (WA) model, Katz back-off, Zipf’s law.
ACM Classification Keywords: I. Computing Methodologies - I.2 ARTIFICIAL INTELLIGENCE - I.2.7 Natural
Language Processing - Speech recognition and synthesis
Link:
MULTILINGUAL REDUCED N-GRAM MODELS
Tran Thi Thu Van and Le Quan Ha
http://www.foibg.com/ijitk/ijitk-vol04/ijitk04-2-p07.pdf
|
THE ARGUMENT BASED COMPUTATION: SOLVING THE BINDING PROBLEM
By: Alona Soschen, Velina Slavova
(3391 reads)
Rating: (1.00/10)
|
Abstract: In this paper, we further develop the argument-based model of syntactic operations that is argued to represent the key to basic mental representations. This work concentrates on formal descriptions of the observed syntax-semantics dependencies. We briefly review our up-to-date experimental work designed to test this hypothesis, and offer the results of our most recent experiment. The results of our experiments confirmed that semantic relations between the images in conceptual nets influence syntactic computation. The binding problem that arises when the same noun can be represented either as Subject (e.g., The cat chases the mouse) or Object (e.g., The mouse chases the cat) was successfully resolved.
Keywords: Cognitive Models of Language Phenomena, Formal Models in Language and Cognition,
Psycholinguistics and Psycho semantics
ACM Classification Keywords: I.2 Artificial Intelligence, I.2.0 Cognitive simulation
Link:
THE ARGUMENT BASED COMPUTATION: SOLVING THE BINDING PROBLEM
Alona Soschen, Velina Slavova
http://www.foibg.com/ijitk/ijitk-vol04/ijitk04-2-p06.pdf
|
A FORMAL REPRESENTATION OF CONCEPT COMPOSITION
By: Daniel Schulzek, Christian Horn, Tanja Osswald
(3502 reads)
Rating: (1.00/10)
|
Abstract: This paper centers on argument saturation in relational-noun compounds. We argue that these compounds can be analyzed in terms of conceptual types, as introduced by Löbner (1985, to appear). He distinguishes between sortal, individual, functional, and proper relational concepts. To describe argument saturation in compounding, we use frames in the sense of Barsalou (1992), since frames give a decompositional account of concepts and in particular reflect the conceptual types in their structure. Subsequently, we investigate relational-noun compounds in German as derived from their conceptual types. That is, we analyze to what extent the conceptual types of the compound constituents determine the concept type of the compound as a whole. For possessive constructions, Löbner (to appear) argues that a construction with a functional head inherits the type of the modifier. We demonstrate that for constructions with a relational head the case is less straightforward: the construction inherits the relational dimension of the modifier and the non-uniqueness from the head noun. However, we show that the combinations for compounds can follow complex compositional rules.
Keywords: word formation, frames, compounds, lexical semantics
ACM Classification Keywords: A.0 General Literature - Conference proceedings Languages, Theory
Link:
A FORMAL REPRESENTATION OF CONCEPT COMPOSITION
Daniel Schulzek, Christian Horn, Tanja Osswald
http://www.foibg.com/ijitk/ijitk-vol04/ijitk04-2-p05.pdf
|
COGNITIVE MODEL OF TIME AND ANALYSIS OF NATURAL LANGUAGE TEXTS
By: Xenia A. Naidenova, Marina I. Garina
(3508 reads)
Rating: (1.00/10)
|
Abstract: The extension to new languages is a well-known bottleneck for any text analysis system. In this paper, a cognitive model of time is proposed and the questions of extracting events and their time characteristics from texts are discussed. The cognitive model of time, due to its independence from any concrete natural language, can be considered a basis for constructing text mining systems intended for extracting temporal relations.
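The paper's cognitive model of time is not described in the abstract; as a generic, language-independent illustration of representing temporal relations between extracted events, the following sketch classifies the qualitative relation between two event intervals (a small subset of Allen's interval relations; the events and timeline are invented).

    from dataclasses import dataclass

    @dataclass
    class Event:
        name: str
        start: float  # e.g. hours on a common timeline
        end: float

    def relation(a, b):
        """Qualitative temporal relation between two events (subset of Allen's relations)."""
        if a.end < b.start:
            return "before"
        if b.end < a.start:
            return "after"
        if a.start == b.start and a.end == b.end:
            return "equals"
        if b.start <= a.start and a.end <= b.end:
            return "during"
        if a.start <= b.start and b.end <= a.end:
            return "contains"
        return "overlaps"

    breakfast = Event("breakfast", 8.0, 8.5)
    meeting = Event("meeting", 8.25, 9.5)
    print(relation(breakfast, meeting))  # -> "overlaps"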
Keywords: Natural Language Processing, cognitive model, time model.
ACM Classification Keywords: I.2.7. Computing Methodologies - Artificial intelligence - Natural Language
Processing
Link:
COGNITIVE MODEL OF TIME AND ANALYSIS OF NATURAL LANGUAGE TEXTS
Xenia A. Naidenova, Marina I. Garina
http://www.foibg.com/ijitk/ijitk-vol04/ijitk04-2-p04.pdf
|
CONTEXT-BASED MODELLING OF SPECIALIZED KNOWLEDGE
By: Pilar León Araúz, Arianne Reimerink, Alejandro G. Aragón
(3330 reads)
Rating: (1.00/10)
|
Abstract: EcoLexicon is a terminological knowledge base (TKB) on the environment where different types of
information converge in a multimodal interface: semantic networks, definitions, contexts and images. It seeks to
meet both cognitive and communicative needs of different users, such as translators, technical writers or even
environmental experts. According to Meyer et al. (1992), TKBs should reflect conceptual structures in a similar way to how concepts relate in the human mind. From a neurological perspective, Barsalou (2009: 1283) states
that a concept produces a wide variety of situated conceptualizations in specific contexts, which clearly
determines the type and number of concepts to be related to. The organization of semantic information in the
brain should thus underlie any theoretical assumption concerning the retrieval and acquisition of specialized
knowledge concepts as well as the design of specialized knowledge resources (Faber, 2010). Furthermore, since
categorization itself is a dynamic context-dependent process, the representation and acquisition of specialized
knowledge should certainly focus on contextual variation. Context includes external factors (situational and
cultural) as well as internal cognitive factors, all of which can influence one another (House, 2006: 342). This view
goes hand in hand with the perception of language as a kind of action, where the meaning of linguistic forms is
understood as a function of their use (Reimerink et al., 2010). In this paper we briefly describe each module of our resource and explain how EcoLexicon has been contextualized according to conceptual and terminological information. The conceptual contextualization of different entries in EcoLexicon has been performed according to
role-based domains and contextual domains, whereas terminological contextualization is based on contextual
domains and use situations. In this way, context is two-fold, since we account for the referential context of
concepts in the real world and users’ own communicative and cognitive context.
Keywords: context, dynamism, reconceptualization, environmental knowledge, TKB.
ACM Classification Keywords: H.5.2 User interfaces – Natural language
Link:
CONTEXT-BASED MODELLING OF SPECIALIZED KNOWLEDGE
Pilar León Araúz, Arianne Reimerink, Alejandro G. Aragón
http://www.foibg.com/ijitk/ijitk-vol04/ijitk04-2-p03.pdf
|
CONCEPTUAL MODELING IN SPECIALIZED KNOWLEDGE RESOURCES
By: Pamela Faber, Antonio San Martín
(3268 reads)
Rating: (1.00/10)
|
Abstract: Conceptual modeling is the activity of formally describing aspects of the physical and social world
around us for purposes of understanding and communication. The conceptual modeler thus has to determine
what aspects of the real world to include, and exclude, from the model, and at what level of detail to model each
aspect (Kotiadis and Robinson, 2008). The way that this is done depends on the needs of the potential users or stakeholders, the domain to be modeled, and the objectives to be achieved. A principled set of conceptual modeling techniques is thus a vital necessity in the elaboration of resources that facilitate knowledge acquisition
and understanding.
In this respect, the design and creation of terminological databases for a specialized knowledge domain is
extremely complex since, ideally, the data should be interconnected in a semantic network by means of an
explicit set of semantic relations. Nevertheless, despite the acknowledged importance of conceptual organization
in terminological resources (Puuronen, 1995; Meyer et al., 1997; Pozzi, 1999; Pilke, 2001), conceptual organization does not appear to have an important role in their design. It is a fact that astonishingly few specialized knowledge resources available on the Internet contain information regarding the location of concepts in larger knowledge configurations (Faber et al., 2006).
Such knowledge resources do not take into account the dynamic nature of categorization, concept storage and
retrieval, and cognitive processing (Louwerse and Jeuniaux, 2010; Aziz-Zadeh and Damasio, 2008; Patterson et al., 2007; Gallese and Lakoff, 2005). Recent theories of cognition reflect the assumption that cognition is
typically grounded in multiple ways, e.g. simulations, situated action, and even bodily states. This means that a
specialized knowledge resource that facilitates knowledge acquisition should thus provide conceptual contexts or
situations in which a concept is conceived as part of a process or event. Since knowledge acquisition and
understanding requires simulation, this signifies that horizontal relations defining goal, purpose, affordance, and
result of the manipulation and use of an object are just as important, if not more so, than vertical generic-specific
and part-whole relations.
Within the context of recent theories of cognition, this paper examines the frame-based conceptual modeling
principles underlying EcoLexicon, a multilingual knowledge base of environmental concepts (http://ecolexicon.ugr.es/) (Faber et al., 2005, 2006, 2007).
Keywords: conceptual modeling, terminological knowledge base, cognition, specialized knowledge
representation
ACM Classification Keywords: J.5 Arts and Humanities – Linguistics
Link:
CONCEPTUAL MODELING IN SPECIALIZED KNOWLEDGE RESOURCES
Pamela Faber, Antonio San Martín
http://www.foibg.com/ijitk/ijitk-vol04/ijitk04-2-p02.pdf
|
FREQUENCY EFFECTS ON THE EMERGENCE OF POLYSEMY AND HOMOPHONY
By: Gertraud Fenk-Oczlon, August Fenk
(3461 reads)
Rating: (1.00/10)
|
Abstract: In this paper we try to answer the following questions: Why do frequently used words tend to polysemy
and homophony? And what comes first - frequency or the higher number of meanings per word? We shall stress
the key role of frequency in the emergence of polysemy and assume an interactive step-up initiated by frequency:
High frequency not only favors reduction processes of words or the bleaching of meanings that can result in
polysemy; it also plays a crucial role in the creation of metaphors or metonymies, i.e., the main sources of
polysemy. Only familiar or frequent source words/concepts tend to be used in metaphorical or metonymical
expressions. Through the conventionalization of the metaphors and metonymies, the source words get additional
meanings. They can now be used in a higher number of contexts, which in turn favors more frequent use.
A similar explanation might hold for the development of homophony: shorter words are known for their tendency to homophony (Jespersen, 1933) and for their high token frequency. Our explanation: high frequency favors
backgrounding processes, such as vowel reduction, lenition and deletion of consonants or even of syllables. This
frequency-induced shortening of words often results in sound merger and in a relatively high proportion of
homophonous words, i.e., words encoding unrelated meanings.
Keywords: frequency, polysemy, homophony, metaphor, metonymy
Link:
FREQUENCY EFFECTS ON THE EMERGENCE OF POLYSEMY AND HOMOPHONY
Gertraud Fenk-Oczlon, August Fenk
http://www.foibg.com/ijitk/ijitk-vol04/ijitk04-2-p01.pdf
|
RESEARCH PORTAL “REGIONS’ INNOVATIVE DEVELOPMENT”
By: Lyudmila Lyadova, Zhanna Mingaleva, Natalia Frolova
(3675 reads)
Rating: (1.00/10)
|
Abstract: This paper presents a project aimed at creating an information-analytic system to solve the problem of organizing the collective work of researchers and supporting their efficient cooperation on one of the topical problems in the sphere of economics: the problem of regions' innovative development. The project supposes the creation of a portal that provides facilities for publishing, searching, analyzing and cataloguing data on the stated subject matter, as well as for information exchange. The system should present not only publications received from different sources, but also the work results of the researchers participating in the project, in particular the proposed models of the innovative development of enterprises, economic sectors and regions, together with quantitative and qualitative assessments of their level of innovative development under conditions of integration on the one hand and intensifying competition on the other. Special attention in the project is paid to the use of up-to-date information technologies in conducting research. The software of the portal includes means of searching for information in different sources and of analytic processing of the information in accordance with the developed methods. Access to the portal will be provided for users of different categories (scientists, lecturers, students, and specialists in public authorities). The first stage is the creation of a research prototype of the system. Its initial filling is expected to be carried out on the basis of data produced by the project participants (in particular, a method for the complex assessment of a region's innovative development based on economic and mathematical methods and models; a knowledge-domain model built on the basis of an ontology and used for searching and analyzing papers and data; etc.).
Keywords: Innovations; Models of innovative development; Ontology; Intellectual search; Data analytic
processing; Web-technologies.
ACM Classification Keywords: H. Information Systems. H.3 Information storage and retrieval: H.3.5 Online
Information Services – Web-based services; H.3.6 Library Automation – Large text archives.
Link:
RESEARCH PORTAL “REGIONS’ INNOVATIVE DEVELOPMENT”
Lyudmila Lyadova, Zhanna Mingaleva, Natalia Frolova
http://www.foibg.com/ijitk/ijitk-vol04/ijitk04-1-p09.pdf
|
LARGE VLSI ARRAYS – POWER AND ARCHITECTURAL PERSPECTIVES
By: Adam Teman, Orly Yadid-Pecht and Alexander Fish
(3205 reads)
Rating: (1.00/10)
|
Abstract: A novel approach to power reduction in VLSI arrays is proposed. This approach includes recognition of
the similarities in the architectures and power profiles of different types of arrays, adaptation of methods developed for one type of array to the others, and component sharing when several arrays are embedded in the same system and operated together. Two types of arrays are discussed: image sensor pixel arrays and SRAM bitcell arrays. For both types
of arrays, architectures and major sources of power consumption are presented and several examples of power
reduction techniques are discussed. Similarities between the architectures and power components of the two
types of arrays are displayed. A number of peripheral sharing techniques for systems employing both Image
Sensors and SRAM arrays are proposed and discussed. Finally, a practical example of a smart image sensor
with an embedded memory is given, using an Adaptive Bulk Biasing Control scheme. The peripheral sharing and
power saving techniques used in this system are discussed. This example was implemented in a standard 90nm
CMOS process and showed a 26% leakage reduction as compared to standard systems.
Keywords: VLSI Arrays, SRAM, Smart Image Sensors, Low Power, AB2C.
ACM Classification Keywords: B.3.1 Semiconductor Memories - SRAM, B.6 Logic Design – Memory Control
and Access, B.7 Integrated Circuits – VLSI, E.1 Data Structures – Arrays, I.4.1 Digitization and Image Capture
Link:
LARGE VLSI ARRAYS – POWER AND ARCHITECTURAL PERSPECTIVES
Adam Teman, Orly Yadid-Pecht and Alexander Fish
http://www.foibg.com/ijitk/ijitk-vol04/ijitk04-1-p08.pdf
|
SELF-ASSEMBLY PROCESS FOR INTEGRATED CIRCUITS BASED ON CARBON NANOTUBES ...
By: David Moreno, Sandra Gómez, Paula Cordero
(3227 reads)
Rating: (1.00/10)
|
Abstract: New methods are needed to create integrated circuits which are able to overcome the inherent
problems in the miniaturization process. These problems are mainly technological and economic: the limits of photolithography and the high cost of fabrication, respectively. This paper proposes the basis for a new
manufacturing process of nanotechnological circuits based on semiconducting carbon nanotubes that work as
FET (Field Effect Transistor) and metallic carbon nanotubes that work as nanowires. This process is based on the
assembly of DNA tiles and lattices that guide the placement of carbon nanotubes to build electronic circuits. The
process takes place within the chambers of a microfluidic device. Building blocks are created based on NAND logic gates. These building blocks can then be combined to assemble AND, OR and NOT logic gates. The process of assembling an XOR logic gate is explained, demonstrating how to apply the process in a concrete case.
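As a plain illustration of the logical side of this construction, namely how AND, OR, NOT and finally XOR can all be composed from NAND building blocks (the DNA self-assembly and microfluidics are, of course, not modelled here), consider the following sketch.

    def nand(a, b):
        """The basic building block: NAND of two bits."""
        return 1 - (a & b)

    def not_(a):          # NOT x = NAND(x, x)
        return nand(a, a)

    def and_(a, b):       # AND   = NOT(NAND(a, b))
        return nand(nand(a, b), nand(a, b))

    def or_(a, b):        # OR    = NAND(NOT a, NOT b)
        return nand(nand(a, a), nand(b, b))

    def xor(a, b):        # XOR from four NAND gates
        m = nand(a, b)
        return nand(nand(a, m), nand(b, m))

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", xor(a, b))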
Keywords: Carbon nanotubes, DNA lattice, FET, Microfluidic devices, Self-assembly process.
ACM Classification Keywords: B.7.1 Types and Design Styles – Advanced technologies
Link:
SELF-ASSEMBLY PROCESS FOR INTEGRATED CIRCUITS BASED ON CARBON
NANOTUBES USING MICROFLUIDIC DEVICES
David Moreno, Sandra Gómez, Paula Cordero
http://www.foibg.com/ijitk/ijitk-vol04/ijitk04-1-p07.pdf
|
VARIETIES OF BIOLOGICAL INFORMATION: A MOLECULAR RECOGNITION APPROACH TO ...
By: Jorge Navarro, Ángel Goñi-Moreno & Pedro C. Marijuán
(3156 reads)
Rating: (1.00/10)
|
Abstract: Bioinformatic and systems biology developments should be accompanied not only by a plethora of
computer tools, but also by an in-depth reflection on the distinctive nature of biological information. In this work
we attempt a consistent approach to the multiple varieties of information in the living cell by starting out from the
conceptualization of molecular recognition phenomena. Subsequently, an elementary approach to the
“informational architectures” behind cellular complexity may be charted. In the interplay of the different
informational architectures two cellular subsystems should be highlighted: on the one side the transcriptional
regulatory network, and on the other, the cellular signaling system that is in charge of the interrelationship with
the environment. The embodiment of functional agents and the peculiar handling of DNA sequences along the
evolutionary process suggests a parallel with the von Neumann scheme of modern computers, including the
cellular capability to “rewrite the DNA rules” along ontogenetic development.
Keywords: Molecular recognition, Informational architectures, DNA addresses, Transcriptional regulatory
network, Cellular signaling system, von Neumann scheme.
ACM Classification Keywords: D. Software. D.1 Programming Techniques
Link:
VARIETIES OF BIOLOGICAL INFORMATION: A MOLECULAR RECOGNITION
APPROACH TO SYSTEMS BIOLOGY AND BIOINFORMATICS
Jorge Navarro, Ángel Goñi-Moreno & Pedro C. Marijuán
http://www.foibg.com/ijitk/ijitk-vol04/ijitk04-1-p06.pdf
|
PIGMENTED RAT-BASED VISION FOR ARTIFICIAL INTELLIGENCE APPLICATIONS
By: Francisco J. Cisneros de los Rios et al.
(3078 reads)
Rating: (1.00/10)
|
Abstract: One of the most important objectives of artificial vision is the development of bioinspired and
biomimetic robot vision as well as the development of bionic eyes for the blind. Depending on the specific
application different eye models can be used but the most ambitious is the development of a human-like eye.
However, the human eye is extremely complex, and even if we managed to design such a visual device, the amount of information to process would be computationally intractable. Normally, low-resolution image quality and low visual acuity would be sufficient for our purposes, so simpler biological models could represent excellent computational alternatives. In the present communication we propose to use the visual system of the rat, and we justify our proposal by proving that this model fits the requirements of our applications perfectly.
Link:
PIGMENTED RAT-BASED VISION FOR ARTIFICIAL INTELLIGENCE APPLICATIONS
Francisco J. Cisneros de los Rios, Isabel Martín Moreno-Cid, Abel Sanchez-Jimenez, Juan Castellanos, Fivos Panetsos
http://www.foibg.com/ijitk/ijitk-vol04/ijitk04-1-p05.pdf
|
LINEAR PROGRAM FORM FOR RAY DIFFERENT DISCRETE TOMOGRAPHY
By: Hasmik Sahakyan, Levon Aslanyan
(3398 reads)
Rating:

(1.00/10)
|
Abstract: A special property of discrete tomography problem solutions, the requirement that the rays be pairwise different, is considered. Two classes of reconstruction tasks for (0,1)-matrices with different rows are studied: matrices with prescribed column and row sums, and matrices with prescribed column sums only. Both cases are known to be algorithmically open problems. We reformulate them as integer programming problems. Depending on the parameters obtained, either the Lagrangean relaxation model followed by the variable splitting technique, or greedy heuristic approaches, are applied to obtain approximate solutions. In the latter case an optimization version is considered, where the objective is to maximize the number of pairwise different row rays; when such a matrix exists, this is equivalent to the requirement of row difference.
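The integer-programming reformulation is not spelled out in the abstract; in its simplest form, with r_i and c_j denoting the prescribed row and column sums (notation chosen here for illustration), the reconstruction task with the ray-difference requirement can be sketched as:

    \text{find } x_{ij} \in \{0,1\}, \quad i = 1,\dots,m,\; j = 1,\dots,n, \quad \text{such that}
    \sum_{j=1}^{n} x_{ij} = r_i \ \ (i = 1,\dots,m), \qquad
    \sum_{i=1}^{m} x_{ij} = c_j \ \ (j = 1,\dots,n),
    (x_{i1},\dots,x_{in}) \neq (x_{k1},\dots,x_{kn}) \quad \text{for all } i \neq k .

In the optimization version mentioned in the abstract, the last (combinatorial) condition is relaxed into an objective that maximizes the number of pairwise different rows; in the variant with prescribed column sums only, the row-sum constraints are dropped.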
Keywords: discrete tomography, (0,1)-matrices, integer programming
ACM Classification Keywords: F.2.2 Nonnumerical Algorithms and Problems: Computations on discrete
structures
Link:
LINEAR PROGRAM FORM FOR RAY DIFFERENT DISCRETE TOMOGRAPHY
Hasmik Sahakyan, Levon Aslanyan
http://www.foibg.com/ijitk/ijitk-vol04/ijitk04-1-p04.pdf
|
MEMBRANE COMPUTING: NON DETERMINISTIC TECHNIQUE TO CALCULATE EXTINGUISHED ...
By: Alberto Arteta, Angel Castellanos, Ana Martinez
(3541 reads)
Rating:

(1.00/10)
|
Abstract: Within the membrane computing research field, there are many papers about software simulations and a few about hardware implementations. In both cases, algorithms are implemented. These algorithms implement membrane systems in software and hardware that try to take advantage of massive parallelism. P-systems are parallel and non-deterministic systems which simulate membrane behavior when processing information.
This paper describes the evolution rules application process and presents software techniques for calculating maximal multisets on every evolutionary step.
These techniques improve the best performance achieved by P-systems when applying evolution rules. Algorithms can stop being useful when the number of objects n on which they depend increases. By using this technique, that specific problem can be overcome: the output can be given with a constant complexity order. The complexity order can be constant under certain conditions, regardless of the value of n. In order to achieve this, the proper use of memory is essential. This work provides the details for building a structure that allows us to improve performance in terms of time. Moreover, this structure can be allocated in random access memory and/or virtual memory.
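The constant-complexity structure proposed in the paper is not described in the abstract; as a baseline illustration of what applying evolution rules to a multiset until no rule remains applicable looks like, the following sketch repeatedly applies randomly chosen applicable rules (the rules and objects are invented).

    import random
    from collections import Counter

    def applicable(rule, objects):
        """A rule (consume, produce) is applicable if its left-hand multiset fits in the region."""
        consume, _ = rule
        return all(objects[o] >= k for o, k in consume.items())

    def apply_maximally(objects, rules, seed=0):
        """Non-deterministically apply rules until no rule is applicable."""
        rng = random.Random(seed)
        objects = Counter(objects)
        while True:
            candidates = [r for r in rules if applicable(r, objects)]
            if not candidates:
                return objects
            consume, produce = rng.choice(candidates)
            objects -= Counter(consume)
            objects += Counter(produce)

    # Hypothetical region: rules rewrite a's and b's into c's.
    rules = [({"a": 2}, {"c": 1}), ({"a": 1, "b": 1}, {"c": 2})]
    print(apply_maximally({"a": 5, "b": 2}, rules))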
Keywords: P-systems, Parallel systems, Natural Computing, evolution rules application, set of patterns,
structure.
ACM Classification Keywords: D.1.m Miscellaneous – Natural Computing
Link:
MEMBRANE COMPUTING: NON DETERMINISTIC TECHNIQUE TO CALCULATE
EXTINGUISHED MULTISETS OF OBJECTS.
Alberto Arteta, Angel Castellanos, Ana Martinez
http://www.foibg.com/ijitk/ijitk-vol04/ijitk04-1-p03.pdf
|
IMPLEMENTING TRANSITION P SYSTEMS
By: Santiago Alonso, Luis Fernández, Víctor Martínez
(3221 reads)
Rating: (1.00/10)
|
Abstract: Natural computing is a whole area where biological processes are simulated to get their advantages for
designing new computation models. Among all the different fields that are being developed, membrane
computing and, more specifically, P Systems, try to get the most out of the biological cell characteristics and of
the chemical processes that take place inside them to model a new computation system.
There have been great advances in this field, and many works have been developed that improve on the original one, introducing new ideas to get the most out of the different algorithms and architectures that could be used for this new model. One of the most difficult areas is the actual implementation of these systems. There are some works that try to implement P Systems by software simulation, and some more that design systems implementing them by using computer networks or specific hardware such as microcontrollers. All these implementations demonstrate their validity, but many of them lack some of the main characteristics of P Systems.
As a continuation of some earlier published works, the present work presents the design of a completely new hardware circuit that may be used to develop a general-purpose P System, complying with the two main characteristics that we consider most important: a high level of parallelism (which makes these systems especially suitable for solving NP problems) and the fact that they should be non-deterministic.
Keywords: Transition P System, membrane computing, circuit design.
ACM Classification Keywords: D.1.m Miscellaneous – Natural Computing
Link:
IMPLEMENTING TRANSITION P SYSTEMS
Santiago Alonso, Luis Fernández, Víctor Martínez
http://www.foibg.com/ijitk/ijitk-vol04/ijitk04-1-p02.pdf
|
IMPROVING ACTIVE RULES PERFORMANCE IN NEW P SYSTEM COMMUNICATION ARCHITECTURES
By: Juan Alberto de Frutos et al.
(3651 reads)
Rating:

(1.00/10)
|
Abstract: Membrane systems are models of computation which are inspired by some basic features of biological
membranes. Transition P systems are very simple models. Many hardware and software architectures have been
proposed for implementing them. In particular, there are implementations in clusters of processors, in microcontrollers and in specialized hardware. This work proposes an analysis of the P system in order to reduce the execution time of a given evolution step.
We present a solution for improving the time needed to work out the active rules subset of a membrane. This task is critical for the efficiency of the entire evolution process because it is performed inside each membrane in every evolution step. Therefore, we propose to carry out a static analysis over the P system. The collected information is used for obtaining a decision tree for each membrane. During the execution of the P system, the active rules of a membrane are determined by solving a classification problem with the corresponding decision tree. By incorporating decision trees for this task, noticeable improvements are obtained.
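As a hedged sketch of the general idea, determining a membrane's active-rules subset by classification on a pre-computed decision tree, one could proceed as below; the features, labels and training examples are invented, and a real implementation would derive the tree from the static analysis described in the paper.

    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical features of a membrane state: multiplicities of objects a, b, c.
    # Labels: index of the active-rules subset determined offline by static analysis.
    states = [[3, 0, 1], [0, 2, 0], [1, 1, 4], [0, 0, 2], [5, 2, 0]]
    active_rule_set = [0, 1, 2, 3, 2]

    tree = DecisionTreeClassifier(criterion="entropy")  # entropy splits, in the spirit of ID3
    tree.fit(states, active_rule_set)

    # During an evolution step, the active-rules subset is looked up by classification.
    print(tree.predict([[2, 1, 3]]))
    print(export_text(tree, feature_names=["a", "b", "c"]))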
Keywords: Decision Tree, ID3, Active Rules, Transition P System
ACM Classification Keywords: I.2.6 Learning – Decision Tree; D.1.m Miscellaneous – Natural Computing
Link:
IMPROVING ACTIVE RULES PERFORMANCE IN NEW P SYSTEM
COMMUNICATION ARCHITECTURES
Juan Alberto de Frutos, Luis Fernández, Carmen Luengo, Alberto Arteta
http://www.foibg.com/ijitk/ijitk-vol04/ijitk04-1-p01.pdf
|
DEVELOPMENT AND ANALYSIS OF THE PARALLEL ANT COLONY OPTIMIZATION ALGORITHM ...
By: Leonid Hulianytskyi, Vitalina Rudyk
(3004 reads)
Rating:

(1.00/10)
|
Abstract: A parallel ant colony optimization algorithm for solving the protein tertiary structure prediction problem, given the amino acid sequence, is introduced. The efficiency of the developed algorithm is studied and the results of a computational experiment on the SCIT supercomputer clusters are discussed.
Keywords: combinatorial optimization, protein tertiary structure prediction, ant colony optimization, parallel
algorithms, SCIT supercomputer.
Link:
DEVELOPMENT AND ANALYSIS OF THE
PARALLEL ANT COLONY OPTIMIZATION ALGORITHM
FOR SOLVING THE PROTEIN TERTIARY STRUCTURE PREDICTION PROBLEM
Leonid Hulianytskyi, Vitalina Rudyk
http://www.foibg.com/ijita/vol21/ijita21-04-p09.pdf
|