### List of technical reports of year

Frangioni, Antonio and Lacalandra, Fabrizio
The Italian and European electricity markets are experiencing a crisis caused by sharp increases in gas prices, which are reflected in dramatically higher final costs of electricity for all consumers w.r.t. historical values. This, however, is only in part motivated by the higher price of the gas actually used for electricity production: a very significant contribution is instead due to the "Pay-as-Clear" (PaC) mechanism implemented in the Day-Ahead Electricity Market (DAM), whereby all producers are remunerated at the price of the most expensive--typically, gas-fired--unit. This has led to a surge of interest in the development of mechanisms capable of decoupling the price paid to the units whose production cost depends on that of fuel--and that therefore must be able to track it to ensure the economic viability of their operations--from the price paid to those for which this is not the case. However, since both types of units participate in satisfying the same demand, this is technically complex. Motivated by this highly compelling application, we propose the concept of Segmented Pay-as-Clear (SPaC) market, introducing a new family of market clearing problems--in fact, a relatively straightforward modification of standard ones--that has the potential to achieve such a decoupling without losing the crucial features of the PaC, i.e., providing both long- and short-term price signals. The approach is based on dynamically partitioning demand across the segmented markets, where the partitioning is endogenous, i.e., controlled by the model variables, and is chosen to minimise the total system cost.
The model thus modified belongs to the family of Bilevel Programming problems with a nonlinear nonconvex objective function, or more generally Mathematical Programs with Complementarity Constraints; these problems have a higher computational complexity than those corresponding to the standard PaC, but in the same ballpark as the models routinely used in real-world DAMs to represent "nonstandard" requirements, e.g., the unique buying price in the Italian DAM. Thus, SPaC models should still be solvable in a time compatible with market operation with appropriate algorithmic tools. Like all market models, SPaC is not immune to strategic bidding techniques, but some theoretical results indicate that, under the right conditions, their effect could be limited. An initial experimental analysis of the proposed models, carried out through Agent-Based simulations, seems to indicate a good potential for significant system cost reductions and an effective decoupling of the two markets.
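The PaC mechanism the abstract targets can be sketched in a few lines; the units, capacities and prices below are invented for illustration and are not from the paper, but the toy shows the effect SPaC aims to correct: every accepted MWh is paid the marginal, typically gas-set, price, well above the true production cost.

```python
# Toy merit-order clearing illustrating the Pay-as-Clear (PaC)
# mechanism; units, capacities and prices are invented for illustration.

def pac_clearing(offers, demand):
    """offers: list of (capacity_MW, price_per_MWh); demand in MW.
    Units are accepted in merit (price) order and every accepted MWh
    is paid the price of the marginal, most expensive accepted unit."""
    dispatch, price, residual = [], 0.0, demand
    for cap, p in sorted(offers, key=lambda o: o[1]):
        if residual <= 0:
            break
        q = min(cap, residual)
        dispatch.append((q, p))
        price = p                    # marginal unit sets the uniform price
        residual -= q
    return dispatch, price

# cheap renewables, mid-cost hydro, expensive gas (EUR/MWh)
offers = [(50, 20.0), (30, 45.0), (40, 120.0)]
dispatch, price = pac_clearing(offers, demand=90)
total_paid = price * sum(q for q, _ in dispatch)   # what consumers pay
total_cost = sum(q * p for q, p in dispatch)       # true production cost
```

With these invented numbers consumers pay 10800 while the true production cost is 3550; the gap is the infra-marginal rent that motivates decoupling the two market segments.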

Frangioni, Antonio and Lacalandra, Fabrizio
The Italian and European electricity markets are experiencing a situation of stress caused by the sharp increases in the price of gas, which are reflected in the final cost of electricity for consumers. This increase, however, exceeds what is strictly due to the higher price of the gas actually used for electricity production, because of the Pay-as-Clear (PaC) mechanism implemented in the Day-Ahead electricity Market (DAM), whereby all producers are remunerated at the price of the most expensive unit---typically, indeed, a gas-fired one. It therefore becomes necessary to try to decouple the production units whose cost depends on that of the fuel---and which must therefore be able to track it to ensure the economic viability of their operations---from those whose production cost is essentially constant and does not depend on such dynamics. Since both types of units participate in satisfying the same demand, however, this is technically complex. In this note we propose a family of DAM clearing problems (in fact, a modification of existing ones) that has the potential to achieve such a decoupling without losing the useful features of the PaC in terms of providing price signals that are indispensable both in the long and in the short term. The approach is based on dynamically partitioning the demand over the two markets, where the partition is a variable of the model and is chosen so as to minimise the total system cost. The model thus modified belongs to the family of Bilevel Programming problems with a nonlinear nonconvex objective function, or more generally Mathematical Programs with Complementarity Constraints; these problems have a higher computational complexity than those currently in use, but are still solvable in times compatible with market operation with appropriate algorithmic tools.
Like all market models, the proposed ones are not immune to strategic bidding techniques, but some theoretical results indicate that, under the right conditions, their effect could be limited. A first experimental analysis of the proposed models, carried out through Agent-Based simulations, seems to indicate a good potential for obtaining significant system cost reductions and an effective decoupling of the two markets.

Bonchi, Filippo and Di Giorgio, Alessandro and Santamaria, Alessio
Rig categories with finite biproducts are categories with two monoidal products, where one is a biproduct and the other distributes over it. In this report we present tape diagrams, a sound and complete diagrammatic language for rig categories with finite biproducts, which can intuitively be thought of as string diagrams of string diagrams.

Astorino, Annabella and Frangioni, Antonio and Gorgone, Enrico and Manca, Benedetto
Separating two finite sets of points in a Euclidean space is a fundamental problem in classification. Customarily, linear separation is used, but nonlinear separators such as spheres have been shown to perform better in some tasks, such as edge detection in images. We exploit the relationships between the more general version of spherical separation, where general ellipsoids are used, and the SVM model with quadratic kernel to propose a new classification approach. The implementation basically boils down to adding an SDP constraint to the standard SVM model in order to ensure that the chosen hyperplane in the feature space represents a non-degenerate ellipsoid in the input space; albeit somewhat more costly than the original formulation, this still allows exploiting many of the techniques developed for SVR in combination with SDP approaches. We test our approach on several classification tasks, including the edge detection problem for gray-scale images, showing that the approach is competitive with both the spherical classification approach and the quadratic-kernel SVM without the ellipsoidal restriction.
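The correspondence the abstract exploits can be illustrated in two dimensions with hand-picked coefficients (a sketch, not the paper's SDP formulation): a hyperplane over the quadratic feature map is an ellipsoid in input space exactly when the matrix of its quadratic coefficients is positive definite, which is the kind of condition the SDP constraint enforces.

```python
# Sketch of the ellipsoid / quadratic-kernel correspondence in 2-D with
# hand-picked coefficients (not the paper's SDP formulation): a hyperplane
# w . phi(x) + b = 0 over phi(x) = (x1^2, x2^2, x1*x2, x1, x2) describes an
# ellipse iff the quadratic-part matrix Q = [[w1, w3/2], [w3/2, w2]] is
# positive definite.

def phi(x1, x2):
    return (x1 * x1, x2 * x2, x1 * x2, x1, x2)

def is_ellipse(w):
    q11, q22, q12 = w[0], w[1], w[2] / 2.0
    # Sylvester's criterion for a 2x2 symmetric matrix
    return q11 > 0 and q11 * q22 - q12 * q12 > 0

# x1^2 + x2^2 - 1 = 0 is the unit circle, a non-degenerate ellipsoid ...
w_circle, b_circle = (1.0, 1.0, 0.0, 0.0, 0.0), -1.0
# ... while x1^2 - x2^2 = 0 is a hyperbola-type surface, not an ellipsoid
w_hyper = (1.0, -1.0, 0.0, 0.0, 0.0)

# the linear decision function in feature space classifies input points
inside = sum(wi * fi for wi, fi in zip(w_circle, phi(0.5, 0.0))) + b_circle
```

Here the decision value at the point (0.5, 0) is negative, classifying it inside the unit circle, while the degeneracy test rejects the hyperbola-type coefficient vector.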

Lanza, Giacomo and Passacantando, Mauro and Scutellà, Maria Grazia
The paper deals with a sequencing and routing problem originating from a real-world application context. The problem consists in defining the best sequence of locations to visit within a warehouse for the storage and/or retrieval of a given set of items during a specified time horizon, where the storage/retrieval location of an item is given. Picking and put-away of items are addressed simultaneously, also considering some specific requirements imposed by the layout design and operating policies which are typical of the kind of warehouses under study. Specifically, the considered sequencing policy prescribes that storage locations must be replenished or emptied one at a time, following a specified order of precedence. Moreover, two fleets of vehicles are used to perform retrieving and storing operations, whose routing is restricted to disjoint areas of the warehouse. We model the problem as a constrained multicommodity flow problem on a space-time network, and we propose a Mixed-Integer Linear Programming formulation whose primary goal is to minimize the time traveled by the vehicles during the time horizon. Since large-size realistic instances are hardly solvable within the time limit commonly imposed in the considered application context, a matheuristic approach based on a time-horizon decomposition is proposed. Finally, we provide an extensive experimental analysis aimed at identifying suitable parameter settings for the proposed approach, and at testing the matheuristic on particularly hard realistic scenarios. The computational experiments show the efficacy and efficiency of the proposed approach.

Fioriti, Davide and Frangioni, Antonio and Poli, Davide
Energy communities (ECs) are essential policy tools to meet the Energy Transition goals, as they can promote renewable energy sources, demand-side management, demand response and citizen participation in energy matters. However, to fully unleash their potential, their design and scheduling require a coordinated technical operation that the community itself may be ill-equipped to manage, in particular in view of the mutual technical and legal constraints ensuing from a coordinated design. Aggregators and Energy Service Companies (ESCOs) can perform this support role, but only provided that their goals are aligned with those of the community, so as not to incur the agency problem. In this study, we propose a business model for aggregators of ECs, and its corresponding technical optimization problem, taking into account all crucial aspects: i) alleviating the risk of the agency problem, ii) fairly distributing the reward awarded to the EC, iii) estimating the fair payment for the aggregator's services, and iv) defining appropriate exit clauses that rule what happens when a user leaves the EC. A detailed mathematical model is derived and discussed, employing several fair and theoretically-consistent reward distribution schemes, some of which are, to the best of our knowledge, proposed here for the first time. A case study is developed to quantify the value of the aggregator and compare the coordinated solution provided by the aggregator with non-coordinated configurations, numerically illustrating the impact of the reward distribution schemes. The results show that, in the case study, the aggregator enables reducing costs by 16% with respect to a baseline solution, and enables reaching a 52.5% renewable share and about 46% self/shared consumption, whereas these same numbers are only 28-35% in the non-coordinated case.
Our results suggest that the aggregator's fair remuneration is around 16-24% of the added benefit produced with respect to the non-coordinated solution, and that stable reward distribution schemes such as Shapley/Core or Nucleolus are recommended. Moreover, the results highlight the unwanted effect that some non-cooperative ECs may obtain an added benefit without providing any positive effect to the power system. Our work lays the foundations for future studies on business models of aggregators for ECs and provides a methodology and preliminary results that can help policy makers and developers in tailoring national-level policies and market offerings.
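Among the reward-distribution schemes named above is the Shapley value; a minimal sketch on an invented three-member community "savings game" (the coalition values are illustrative, not from the case study) shows how it splits the grand-coalition benefit by average marginal contribution.

```python
import math
from itertools import permutations

# Shapley value on an invented 3-member community savings game;
# coalition values are illustrative, not from the paper's case study.

def shapley(players, value):
    """Average marginal contribution of each player over all orderings."""
    sh = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            grown = coalition | {p}
            sh[p] += value(grown) - value(coalition)
            coalition = grown
    fact = math.factorial(len(players))
    return {p: v / fact for p, v in sh.items()}

# v(S): euros saved by coalition S coordinating through the aggregator
v = {frozenset(): 0.0, frozenset('A'): 10.0, frozenset('B'): 10.0,
     frozenset('C'): 0.0, frozenset('AB'): 30.0, frozenset('AC'): 20.0,
     frozenset('BC'): 20.0, frozenset('ABC'): 42.0}
rewards = shapley('ABC', v.__getitem__)
```

With these numbers the symmetric members A and B each receive about 17.33, C receives about 7.33, and the three shares sum to the grand-coalition value 42 (the efficiency property).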

Cacciola, Matteo and Frangioni, Antonio and Galli, Laura and Stea, Giovanni
Mobile cellular networks play a pivotal role in emerging Internet of Things (IoT) applications, such as vehicular collision alerts, malfunctioning alerts in Industry-4.0 manufacturing plants, periodic distribution of coordination information for swarming robots or platooning vehicles, etc. All these applications are characterized by the need to route messages within a given local area (geographic proximity) with constraints on both timeliness and reliability (i.e., probability of reception). This paper presents a Non-Convex Mixed-Integer Nonlinear Programming model for a routing problem with probabilistic constraints on a wireless network. We propose an exact approach consisting of a branch-and-bound framework based on a novel Lagrangian decomposition to derive lower bounds. Preliminary experimental results indicate that the proposed algorithm is competitive with state-of-the-art general-purpose solvers, and can provide better solutions than existing highly tailored ad-hoc heuristics for this problem.

Scuzziato, Murilo Reolon and Finardi, Erlon Cristian and Frangioni, Antonio
The high penetration of intermittent renewable generation has prompted the development of Stochastic Hydrothermal Unit Commitment (SHUC) models, which are more difficult to solve than their thermal-based counterparts due to hydro generation constraints and inflow uncertainties. This work presents a SHUC model applied to centralized cost-based dispatch, where the uncertainty is related to the water availability in reservoirs and to demand. The SHUC is represented by a two-stage stochastic model, formulated as a large-scale mixed-binary linear programming problem. The solution strategy is divided into two steps, performed sequentially, with interleaved iterations to find the optimal generation schedule. The first step is the Lagrangian Relaxation (LR) approach. The second step is a Primal Recovery based on LR solutions and a heuristic based on Benders' Decomposition. Both steps benefit from each other, exchanging information over the iterative process. We assess our approach in terms of the quality of the solutions and running times on space and scenario LR decompositions. The results show the advantage of our primal recovery technique compared to solving the problem via a MILP solver. This is true already for the deterministic case, and the advantage grows as the problem's size (number of plants and/or scenarios) does.

Ghezelsoflu, Ali and Di Francesco, Massimo and Frangioni, Antonio and Zuddas, Paola
We investigate a routing problem arising in the domain of drayage operations, where minimum-cost vehicle routes must be determined over several periods. We adapt a set-covering model, which is solved either with all feasible routes by an off-the-shelf MIP solver, or by a Price-and-Branch algorithm in which the pricing problem is formulated as a collection of shortest-path problems on tailor-made auxiliary acyclic networks. We propose a new arc-flow formulation based on these auxiliary networks and show that solving it with a MIP solver is usually preferable. Finally, we characterize how possible changes in flexibility levels affect routing costs.

Brodo, Linda and Bruni, Roberto and Falaschi, Moreno
In the area of Natural Computing, reaction systems are a qualitative abstraction inspired by the functioning of living cells, suitable for modelling the main mechanisms of biochemical reactions. This model has already been applied and extended successfully to various areas of research. Reaction systems interact with the environment, represented by the context, and, being a new computation model, pose implementation problems. In this paper we consider the link-calculus, which allows modelling multiparty interaction in concurrent systems, and show that it can embed reaction systems, representing the behaviour of each entity and faithfully preserving their features. We show the correctness and completeness of our embedding. We illustrate our framework by showing how to embed a lac operon regulatory network. Finally, our framework can contribute to increasing the expressiveness of reaction systems, by exploiting the interaction among different reaction systems.
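The reaction-system model mentioned above has a very small executable core, sketched here with invented entity names loosely echoing lac-operon-style regulation (this is an illustration of the general formalism, not the paper's embedding): a reaction fires on a state when all its reactants are present and none of its inhibitors are, and the next state is the union of the products of all enabled reactions.

```python
# Minimal reaction-system step: a reaction is a triple
# (reactants, inhibitors, products); entity names are illustrative.

def step(reactions, state):
    nxt = set()
    for reactants, inhibitors, products in reactions:
        if reactants <= state and not (inhibitors & state):
            nxt |= products           # only products persist: no permanence
    return nxt

rs = [
    ({'lactose'}, {'glucose'}, {'enzyme'}),   # lactose induces, glucose inhibits
    ({'enzyme', 'lactose'}, set(), {'sugar'}),
]
s1 = step(rs, {'lactose'})
```

From {'lactose'} the next state is {'enzyme'}; with the enzyme also present the sugar is produced, while adding glucose blocks the first reaction entirely.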

Bigi, Giancarlo and Passacantando, Mauro
An algorithm for solving quasi-equilibrium problems (QEPs) is proposed relying on the sequential inexact resolution of equilibrium problems. First, we reformulate QEP as the fixed point problem of a set-valued map and analyse its Lipschitz continuity under strong monotonicity assumptions. Then, a few classes of QEPs satisfying these assumptions are identified. Finally, we devise an algorithm that computes an inexact solution of an equilibrium problem at each iteration and we prove its global convergence.
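The convergence mechanism behind the scheme above is the classical one for contractive fixed-point maps: if the inner solution map is Lipschitz with constant below one, iterating it converges. A minimal caricature, with an invented scalar contraction T standing in for the inner-EP solution map:

```python
# Banach-style fixed-point iteration; T is an invented contraction
# standing in for the (Lipschitz) solution map of the inner problem.

def fixed_point(T, x0, tol=1e-10, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        y = T(x)
        if abs(y - x) < tol:
            return y
        x = y
    return x

T = lambda x: 0.5 * x + 1.0   # Lipschitz constant 0.5 < 1
x_star = fixed_point(T, 0.0)  # unique fixed point of T
```

Here the iteration converges geometrically to the unique fixed point x = 2, mirroring the role the strong-monotonicity assumptions play in guaranteeing the contraction property.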

Carosi, Samuela and Frangioni, Antonio and Galli, Laura and Girardi, Leopoldo and Vallese, Giuliano
Planning a public transportation system is a complex process, which is usually broken down into several phases performed in sequence. Most often, the trips required to cover a service with the desired frequency (headway) are decided early on, while the vehicles needed to cover these trips are determined at a later stage. This potentially leads to requiring a larger number of vehicles (and, therefore, drivers) than would be possible if the two decisions were taken simultaneously. We propose a multicommodity-flow type model for integrated timetabling and vehicle scheduling. Since the model is large-scale and cannot be solved by off-the-shelf tools with the efficiency required by planners, we propose a diving-type matheuristic approach for the problem. We report on the efficiency and effectiveness of two variants of the proposed approach, differing in how the continuous relaxation of the problem is solved, on real-world instances of the bus transport planning problem originating from customers of M.A.I.O.R., a leading company providing services and advanced decision-support systems to public transport authorities and operators. The results show that the approach can be used to aid even experienced planners in either obtaining better solutions, or obtaining them faster and with less effort, or both.

Galli, Laura and Letchford, Adam N. On Linearising Mixed-Integer Quadratic Programs via Bit Representation. Technical Report del Dipartimento di Informatica, Università di Pisa, Pisa, IT.
It is well known that, under certain conditions, one can use bit representation to transform both integer quadratic programs and mixed-integer bilinear programs into mixed-integer linear programs (MILPs), and thereby render them easier to solve using standard software packages. We show how to convert a more general family of mixed-integer quadratic programs to MILPs, and present several families of strong valid linear inequalities that can be used to strengthen the continuous relaxations of the resulting MILPs.
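The classical trick the abstract builds on can be checked exhaustively on a toy instance (small bounds chosen for illustration, not the paper's general family): an integer 0 <= x <= 7 is written as x = b0 + 2*b1 + 4*b2 with binary b_k, and the bilinear term x*y with binary y becomes the linear sum of 2^k * z_k, where each z_k is pinned to the product b_k*y by four linear constraints.

```python
from itertools import product

# Brute-force check that the standard linearization of z_k = b_k * y
# (binary b_k, y) is exact: z <= b_k, z <= y, z >= b_k + y - 1, z >= 0.

def feasible_z(bk, y):
    """All z in {0, 1} satisfying the four linear constraints."""
    return [z for z in (0, 1) if z <= bk and z <= y and z >= bk + y - 1]

exact = True
for b0, b1, b2, y in product((0, 1), repeat=4):
    x = b0 + 2 * b1 + 4 * b2            # bit representation of x in [0, 7]
    zs = [feasible_z(b, y) for b in (b0, b1, b2)]
    # the constraints leave exactly one choice, z_k = b_k * y ...
    if any(z != [b * y] for z, b in zip(zs, (b0, b1, b2))):
        exact = False
    # ... so the linear expression reproduces the bilinear one
    if sum(2 ** k * zs[k][0] for k in range(3)) != x * y:
        exact = False
```

The exhaustive loop confirms that for every binary assignment the linear system admits exactly the product values, so the MILP reformulation introduces no spurious solutions.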

Frangioni, Antonio and Galli, Laura
Effectively sharing resources requires solving complex decision problems. This involves constructing a mathematical model of the underlying system, and then applying appropriate mathematical methods to find an optimal solution of the model, which is ultimately translated into actual decisions. The development of mathematical tools for solving optimization problems dates back to Newton and Leibniz, but it has tremendously accelerated since the advent of digital computers. Today, optimization is an inter-disciplinary subject, lying at the interface between management science, computer science, mathematics and engineering. This chapter offers an introduction to the main theoretical and software tools that are nowadays available to practitioners to solve the kind of optimization problems that are most likely to be encountered in the context of this book. Using, as a case study, a simplified version of the bike sharing problem, we guide the reader through the discussion of modelling and algorithmic issues, concentrating on methods for solving optimization problems to proven optimality.

Frangioni, Antonio
We review the basic ideas underlying the vast family of algorithms for nonsmooth convex optimization known as "bundle methods". In a nutshell, these approaches are based on constructing models of the function, but the lack of continuity of first-order information implies that these models cannot be trusted, not even close to an optimum. Therefore, many different forms of stabilization have been proposed to try to avoid being led to areas where the model is so inaccurate as to result in almost useless steps. In the development of these methods, duality arguments are useful, if not outright necessary, to better analyze the behaviour of the algorithms. Also, in many relevant applications the function at hand is itself a dual one, so that duality allows mapping algorithmic concepts and results back into a "primal space" where they can be exploited; in turn, structure in that space can be exploited to improve the algorithms' behaviour, e.g. by developing better models. We present an updated picture of the many developments around the basic idea along at least three different axes: form of the stabilization, form of the model, and approximate evaluation of the function.
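The core object of the methods surveyed above, the cutting-plane model, is easy to sketch: from (point, value, subgradient) triples it builds a piecewise-linear underestimate of the convex function (shown here on f(x) = x^2, an illustrative choice). That the model can be arbitrarily loose away from the bundle points is exactly why stabilization is needed.

```python
# Cutting-plane model of a convex function from a bundle of
# (point, value, subgradient) triples; f(x) = x^2 is illustrative.

def cutting_plane_model(bundle):
    """Returns m(x) = max_i [ f_i + g_i * (x - x_i) ], a lower model of f."""
    return lambda x: max(fi + gi * (x - xi) for xi, fi, gi in bundle)

f = lambda x: x * x
bundle = [(x, f(x), 2 * x) for x in (-2.0, 1.0, 3.0)]   # exact oracle calls
m = cutting_plane_model(bundle)
gap_at_0 = f(0.0) - m(0.0)   # model error away from the bundle points
```

The model coincides with f at the bundle points but underestimates it everywhere else; at x = 0 it returns -1 against the true value 0, a gap a stabilized step must guard against.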

van Ackooij, Wim and Danti Lopez, Irene and Frangioni, Antonio and Lacalandra, Fabrizio and Tahanan, Milad
The Unit Commitment problem in energy management aims at finding the optimal production schedule of a set of generation units, while meeting various system-wide constraints. It has always been a large-scale, non-convex, difficult problem, especially in view of the fact that, due to operational requirements, it has to be solved in an unreasonably small time for its size. Recently, growing renewable energy shares have strongly increased the level of uncertainty in the system, making the (ideal) Unit Commitment model a large-scale, non-convex and uncertain (stochastic, robust, chance-constrained) program. We provide a survey of the literature on methods for the Uncertain Unit Commitment problem, in all its variants. We start with a review of the main contributions on solution methods for the deterministic versions of the problem, focussing on those based on mathematical programming techniques that are more relevant for the uncertain versions of the problem. We then present and categorize the approaches to the latter, while providing entry points to the relevant literature on optimization under uncertainty. This is an updated version of the paper "Large-scale Unit Commitment under uncertainty: a literature survey" that appeared in 4OR 13(2), 115--171 (2015); this version has over 170 more citations, most of which appeared in the last three years, showing how fast the literature on uncertain Unit Commitment evolves, and therefore how strong the interest in this subject is.

Di Francesco Maesa, Damiano
In this paper we explain the basics of the Bitcoin protocol and the state of the art of the main attacks against it. We first present an overview of digital currencies, showing what they are and the social need they aim to satisfy. We then focus on the leading digital currency to date, Bitcoin. We cover the basics of the protocol, showing what addresses and transactions are and how they are used in a distributed consensus protocol to build the blockchain. After that, the main part of this paper presents the state of the art of the three main attacks on the protocol: fraudulent mining techniques, double spending attempts and deanonymization attacks.

Virdis, Antonio and Stea, Giovanni and Sabella, Dario and Caretti, Marco
Almost Blank Subframes (ABS) have been defined in LTE as a means to coordinate transmissions in heterogeneous networks (HetNets), composed of macro and micro eNodeBs: the macro issues ABS periods, and refrains from transmitting during ABSs, thus creating interference-free subframes for the micros. Micros report their capacity demands to the macro via the X2 interface, and the latter provisions the ABS period accordingly. Existing algorithms for ABS provisioning usually share resources proportionally among HetNet nodes in a long-term perspective (e.g., based on traffic forecast). We argue instead that this mechanism can be exploited to save power in the HetNet: in fact, during ABSs, the macro consumes less power, since it only transmits pilot signals. Dually, the micros may inhibit data transmission themselves in some subframes, and optimally decide when to do this based on knowledge of the ABS period. This allows us to define a power saving framework that works in the short term, modifying the ABS pattern at the fastest possible pace, serving the HetNet traffic at reduced power cost. Our framework is designed using only standard signaling. Simulations show that the algorithm consumes less power than its competitors, especially at low loads, and improves the UE QoS.

Scuzziato, Murilo Reolon and Finardi, Erlon Cristian and Frangioni, Antonio
Solving very-large-scale optimization problems frequently requires decomposing them into smaller subproblems, which are iteratively solved to produce useful information. One such approach is the Lagrangian Relaxation (LR), a broad-range technique that leads to many different decomposition schemes. The LR supplies a lower bound on the objective function and useful information for heuristics aimed at constructing feasible primal solutions. In this paper, we compare the main LR strategies used so far for Stochastic Hydrothermal Unit Commitment problems, where uncertainty mainly concerns water availability in reservoirs and demand (weather conditions). This problem is customarily modeled as a two-stage mixed-integer optimization problem. We compare different decomposition strategies (unit and scenario schemes) in terms of quality of the produced lower bound and running time. The schemes are assessed on various hydrothermal systems, considering different configurations of power plants, in terms of capacity and number of units.

Brogi, Antonio and Forti, Stefano and Ibrahim, Ahmad
The design and management of Edge systems will proactively involve human intelligence at the Edge, according to a human-driven approach that increases productivity and improves usability. Due to its ubiquity and heterogeneity, the Edge will give application administrators a more decisional role in application deployment and resource management. Final decisions on where to distribute application components should be taken by them, in an informed manner, during the entire application lifecycle, accounting for compliance with QoS requirements. As a first step, this requires devising new tools that suitably abstract the heterogeneity of Edge systems, permit simulating different runtime scenarios and ease human-driven management of such systems by providing meaningful evaluation metrics. In this article, we discuss how human decision-making can be supported to solve QoS-aware management challenges for Edge computing.

Frangioni, Antonio and Gendron, Bernard and Gorgone, Enrico
We present and computationally evaluate a variant of the fast gradient method by Nesterov that is capable of exploiting information, even if approximate, about the optimal value of the problem. Such information is available in some applications, such as the computation of bounds for hard integer programs. We show that dynamically changing the smoothness parameter of the algorithm using this information results in a better convergence profile of the algorithm in practice.
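The idea of exploiting knowledge of the optimal value appears in its simplest classical form in the Polyak step size for subgradient methods, sketched here on a small nonsmooth toy; this is the Polyak rule, not the authors' smoothed fast-gradient variant, but it shows concretely how f* information sharpens step choices.

```python
# Polyak-step subgradient method on f(x, y) = |x - 3| + |y + 1|;
# the step t = (f(x) - f_star) / ||g||^2 uses the known optimal value.

def polyak_subgradient(f, subgrad, x0, f_star, iters=200):
    x = list(x0)
    for _ in range(iters):
        g = subgrad(x)
        norm2 = sum(gi * gi for gi in g)
        if norm2 == 0:
            break
        t = (f(x) - f_star) / norm2        # Polyak step: uses f*
        x = [xi - t * gi for xi, gi in zip(x, g)]
    return x

f = lambda x: abs(x[0] - 3) + abs(x[1] + 1)
sg = lambda x: [(1 if x[0] >= 3 else -1), (1 if x[1] >= -1 else -1)]
x = polyak_subgradient(f, sg, [0.0, 0.0], f_star=0.0)
```

On this toy the iterates reach the minimizer (3, -1) after two steps; with only an approximate f*, as in the abstract, the step (or the smoothness parameter) is adjusted dynamically instead.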

Mukala, Patrick
Numerous studies continue to explore the potential of social interactions between people in Free/Libre Open Source Software (FLOSS) environments. While the dynamics of interactions in these environments can be understood from different perspectives, we put a particular focus on interactions resulting in knowledge transfer and acquisition. As learning platforms, FLOSS communities provide immense opportunities for improving software engineering skills. People who engage in FLOSS activities both acquire and improve their software development skills. For this reason, it is very helpful to understand how these learning interactions occur. In this paper, we make use of the decision miner in process mining to conduct our analysis. The purpose of such an endeavour is twofold. Firstly, we provide empirical insights into how people learn while exchanging emails in FLOSS mailing archives. Secondly, we go a step further by providing insights into the motivation behind participants' decisions on their learning paths.

Bigi, Giancarlo and Passacantando, Mauro
A model for Nash-Cournot oligopolistic markets with concave cost functions and a differentiated commodity is analysed. Equilibrium states are characterized through Ky Fan inequalities. Relying on the minimization of a suitable merit function, a general algorithmic scheme for solving them is provided. Two concrete algorithms are therefore designed that converge under suitable convexity and monotonicity assumptions. The results of preliminary numerical tests on randomly generated markets are also reported.

Brogi, Antonio and Forti, Stefano
Fog computing aims at extending the Cloud by bringing computational power, storage and communication capabilities to the edge of the network, in support of the IoT. Segmentation, distribution and adaptive deployment of functionalities over the continuum from Things to Cloud are challenging tasks, due to the intrinsic heterogeneity, hierarchical structure and very large scale of the infrastructures they will have to exploit. In this paper we propose a simple, yet general, model to support the QoS-aware deployment of multi-component IoT applications over Fog infrastructures. The model describes operational systemic qualities of the available infrastructure (latency and bandwidth), interactions among software components and Things, and business policies. Algorithms to determine eligible deployment plans for an application over a Fog infrastructure are presented. A Java tool, FogTorch, based on the proposed model, has been prototyped.

Mukala, Patrick
Process mining is a relatively new field that encompasses powerful data and process analytics techniques for understanding processes from event data. In addition to these main techniques, it provides means for a pictorial representation of the occurrence of events over time. By applying such visualizations to event data from Free/Libre Open Source Software (FLOSS) environments, we gain a fuller understanding of how certain activities take place within such environments over time. In particular, given the increasing interest in the learning paradigms present in FLOSS communities, we believe that a temporal visual representation of learning events can yield great benefits. In this paper, we make use of the dotted chart in process mining to model and present a representation of learning behaviours over time for FLOSS participants.

Mukala, Patrick
FLOSS environments have been shown to provide an interesting learning platform for software engineers. Research suggests that people partaking in both technical and non-technical activities in FLOSS projects are more likely to positively improve their software engineering skills. To this end, there are proposals to have computer science and software engineering students in formal institutions of higher learning participate in FLOSS projects, in order to give them an opportunity to develop their programming capacity by working on real-life projects. While some empirical studies have been conducted to shed some light on learning processes in FLOSS environments, there is little or no work on understanding social structures during this process of knowledge transfer and acquisition. In this paper, we make use of social network analysis techniques to provide insights into the emergence of social structures in FLOSS repositories from an educational point of view. We hope that these educational structures will enhance both the understanding of how learning occurs in these communities and, especially, of the frequency of participants' involvement that culminates in learning.

Frangioni, Antonio and Furini, Fabio and Gentile, Claudio
We propose an improvement of the Approximated Projected Perspective Reformulation (AP^2R) of [Frangioni, Furini, Gentile, Computational Optimization and Applications, 2016] for the case in which constraints linking the binary variables exist. The new approach requires solving the Perspective Reformulation (PR) once, and then using the corresponding dual information to reformulate the problem prior to applying AP^2R, thereby combining the root bound quality of the PR with the reduced relaxation computing time of AP^2R. Computational results for the cardinality-constrained Mean-Variance portfolio optimization problem show that the new approach is competitive with state-of-the-art ones.

van Ackooij, Wim and Frangioni, Antonio
We propose a family of proximal bundle methods for minimizing sum-structured convex nondifferentiable functions which require two slightly uncommon assumptions, satisfied in many relevant applications: Lipschitz continuity of the functions and oracles which also produce upper estimates on the function values. In exchange, the methods: i) use upper models of the functions that allow estimating function values at points where the oracle has not been called; ii) provide the oracles with more information about when the function computation can be interrupted, possibly diminishing their cost; iii) allow skipping oracle calls entirely for some of the component functions, not only at ``null steps'' but also at ``serious steps''; iv) provide explicit and reliable a-posteriori estimates of the quality of the obtained solutions; v) work with all possible combinations of different assumptions on the oracles. We also discuss the introduction of constraints (or, more generally, of easy components) and the use of (partly) aggregated models. |

Bigi, Giancarlo and Passacantando, Mauro. Gap functions for quasi-equilibria. Technical Report del Dipartimento di Informatica, TR
An approach for solving quasi-equilibrium problems (QEPs) is proposed relying on gap functions, which allow reformulating QEPs as global optimization problems. The (generalized) smoothness properties of a gap function are analysed and an upper estimate of its Clarke directional derivative is given. Monotonicity assumptions on both the equilibrium and constraining bifunctions are a key tool to guarantee that all the stationary points of a gap function actually solve QEP. A few classes of constraints satisfying such assumptions are identified, covering a wide range of situations. Relying on these results, a descent method for solving QEP is devised and its convergence proved. Finally, error bounds are given in order to guarantee the boundedness of the sequence generated by the algorithm. |

Frangioni, Antonio and Galli, Laura and Stea, Giovanni
It has been recently shown that the problem of routing a new packet flow in a computer network subject to a constraint on the worst-case end-to-end delay of its packets can be formulated as a Mixed-Integer Second-Order Cone Program (MI-SOCP), and solved with general-purpose tools on instances of realistic size in time compatible with real-time use. However, that result was obtained for only one of the classes of schedulers in use in today's networks, namely Strictly Rate-Proportional ones. Furthermore, that result relies on assuming that each link is "fully loaded", so that the reserved rate of the flow coincides with its guaranteed rate. These assumptions entail both simple latency expressions, and the fact that flows are isolated from each other as far as their end-to-end delay is concerned; that is, admitting a new flow does not increase the end-to-end delay of the existing ones. However, other classes of scheduling algorithms commonly found in current networks both yield more complex latency formulae and do not enforce strict flow independence. Furthermore, the delay actually depends on the guaranteed rate of the flow, which can be significantly larger than the reserved rate if the network is less loaded. In this paper we extend the result to other classes of schedulers and to a more accurate representation of the latency, showing that, despite the increase in complexity, the problem is still efficiently solvable, even when admission control needs to be factored in, for realistic instances, provided that the right modeling choices are made. |

Frangioni, Antonio and Galli, Laura and Stea, Giovanni
In a network where weighted fair-queueing schedulers are used at each link, worst-case end-to-end delays can be inferred from per-link rate reservations. Therefore, it is also possible to compute resource-constrained paths that meet target delay constraints and optimize some key performance metrics (e.g., minimize the overall reserved rate, maximize the remaining capacity at bottleneck links, etc.). Despite the large amount of literature on QoS routing that has appeared since the mid '90s, few papers so far have discussed solving such a path computation problem at optimality in general settings. In this paper, we formulate and solve the optimal path computation and resource allocation problem assuming different types of weighted fair-queueing schedulers in the network. We show that, depending on the scheduling algorithm, routing a new flow may or may not affect the performance of existing flows; hence, explicit admission control constraints may be required to ensure that existing flows still meet their deadline afterwards. Yet, for the relevant schedulers proposed in the literature the problem can be formulated as a Mixed-Integer Second-Order Cone problem (MI-SOCP), and can be solved at optimality in split-second times even in fairly large networks. |

Bigi, Giancarlo and Pappalardo, Massimo and Passacantando, Mauro
The paper deals with the gap function approach for equilibrium problems with locally Lipschitz data. The gap function inherits the locally Lipschitz continuity of the data. Hence, the connections between its generalized directional derivatives, monotonicity conditions on the equilibrium bifunction and descent properties can be analysed. In turn, this analysis leads to devise two descent methods. Finally, the results of preliminary numerical tests are reported. |

Brogi, Antonio and Canciani, Andrea and Soldani, Jacopo
Managing complex applications over heterogeneous clouds is one of the emerging problems in the cloud era. The OASIS Topology and Orchestration Specification for Cloud Applications (TOSCA) aims at solving this problem by providing a language to describe and manage complex cloud applications in a portable and vendor-agnostic way. TOSCA permits defining an application as an orchestration of components, whose types can specify states, requirements, capabilities and management operations --- but not how they interact with each other. In [1][2], we already discussed a simple extension of TOSCA that permits describing the behaviour of a component's management operations and their relations with its states, requirements, and capabilities. The objective of this short report is to show how to enrich the TOSCA modelling language to provide such an extension. |

Del Corso, Gianna M. and Menchi, Ornella and Romani, Francesco

Mukala, Patrick and Cerone, Antonio and Turini, Franco
Evidence suggests that Free/Libre Open Source Software (FLOSS) environments provide unlimited learning opportunities. Community members engage in a number of activities both during their interaction with their peers and while making use of the tools available in these environments. A number of studies document the existence of learning processes in FLOSS through the analysis of surveys and questionnaires filled in by FLOSS project participants. At the same time, the interest in understanding the dynamics of the FLOSS phenomenon, its popularity and success resulted in the development of tools and techniques for extracting and analyzing data from different FLOSS data sources. This new field is called Mining Software Repositories (MSR). In spite of these efforts, there is limited work aiming to provide empirical evidence of learning processes directly from FLOSS repositories. In this paper, we seek to trigger such an initiative by proposing an approach based on Process Mining to trace learning behaviors from FLOSS participants' trails of activities, as recorded in FLOSS repositories, and visualize them as process maps. Process maps provide a pictorial representation of real behavior as it is recorded in FLOSS data. Our aim is to provide critical evidence that boosts the understanding of learning behavior in FLOSS communities by analyzing the relevant repositories. In order to accomplish this, we propose an effective approach that comprises first the mining of FLOSS repositories in order to generate event logs, and then the generation of process maps, equipped with relevant statistical data interpreting and indicating the value of process discovery from these repositories. |
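A toy sketch of the first step, under assumed data (the activity records below are hypothetical, and directly-follows counts are only a crude stand-in for the process maps that discovery tools draw):

```python
from collections import Counter

# Hypothetical trace data: (participant, activity, timestamp) records mined
# from FLOSS repositories, forming a minimal event log.
events = [
    ("p1", "post_question", 1), ("p1", "read_reply", 2), ("p1", "commit_code", 3),
    ("p2", "post_question", 1), ("p2", "read_reply", 2), ("p2", "post_question", 3),
]

def directly_follows(events):
    """Count how often one activity directly follows another per participant."""
    follows = Counter()
    last_activity = {}
    # Sort by participant, then time, so each actor's trail is in order.
    for actor, activity, ts in sorted(events, key=lambda e: (e[0], e[2])):
        prev = last_activity.get(actor)
        if prev is not None:
            follows[(prev, activity)] += 1
        last_activity[actor] = activity
    return follows

print(directly_follows(events)[("post_question", "read_reply")])
```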

Soldani, Jacopo and Binz, Tobias and Breitenbücher, Uwe and Leymann, Frank and Brogi, Antonio
To fully appreciate the power of cloud computing, the design and development of cloud applications should be eased and supported. The OASIS TOSCA standard enables developers to design and develop cloud applications by specifying their topologies as orchestrations of typed nodes and relationships. However, building such application topologies often results in reinventing the wheel multiple times, when similar solutions are manually created for different applications by different developers having the same requirements. Thus, the reusability of existing TOSCA solutions is crucial to ease and support design and development processes. In this paper, we tackle this issue. We introduce TOSCA-MART, a method that enables deriving valid implementations for custom components from a repository of complete and validated cloud applications. The method enables developers to specify individual components in their application topologies, and illustrates how to match, adapt, and reuse existing (fragments of) applications to implement these components while fulfilling all their compliance requirements. We also characterize and validate TOSCA-MART by means of a prototypical implementation based on an open-source toolchain and a case study. |

Bartoloni, Leonardo and Brogi, Antonio and Ibrahim, Ahmad
The ability to a priori predict the QoS of a service orchestration is of pivotal importance both for the design of service compositions and for the definition of their SLAs. In this paper we present an algorithm to probabilistically predict the QoS of WS-BPEL service orchestrations. Our algorithm employs Monte Carlo simulations and it improves previous approaches by coping with complex dependency structures, unbound loops, fault handling, and unresponded service invocations. A proof-of-concept implementation of the algorithm in F# is described. |
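A minimal sketch of the general Monte Carlo idea (not the paper's algorithm; the orchestration shape, latency distributions and parameters below are assumed): invoke service A, then B and C in parallel, and estimate the response-time distribution by sampling:

```python
import random

random.seed(42)

def sample_latency(mean):
    # Assume exponentially distributed service response times (in ms).
    return random.expovariate(1.0 / mean)

def simulate_orchestration():
    a = sample_latency(100.0)                # sequential invocation of A
    b, c = sample_latency(50.0), sample_latency(80.0)
    return a + max(b, c)                     # parallel flow: wait for the slower

samples = sorted(simulate_orchestration() for _ in range(100_000))
mean = sum(samples) / len(samples)
p95 = samples[int(0.95 * len(samples))]      # empirical 95th percentile
print(f"mean ~ {mean:.0f} ms, 95th percentile ~ {p95:.0f} ms")
```

Fault handling and unresponded invocations, which the paper covers, would show up here as extra branches in `simulate_orchestration`.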

van Ackooij, Wim and Frangioni, Antonio and de Oliveira, Welington
Motivated by a class of chance-constrained optimization problems, we explore modifications of the (generalized) Benders' decomposition approach. The chance-constrained problems we consider involve a random variable with an underlying discrete distribution, are convex in the decision variable, but their probabilistic constraint is neither separable nor linear. The variants of Benders' approach we propose exploit advances in cutting-plane procedures developed for the convex case. Specifically, the approach is stabilized in two ways: via a proximal term/trust region in the L1 norm, or via a level constraint. Furthermore, the approaches can use inexact oracles, in particular informative on-demand inexact ones. The simultaneous use of the two features requires a nontrivial convergence analysis; we provide it under what would seem to be the weakest possible assumptions on the handling of the two parameters controlling the oracle (target and accuracy), strengthening earlier known results. The numerical performance of the approaches is assessed on a class of hybrid robust and chance-constrained conic problems. The numerical results show that the approach has potential, especially for instances that are difficult to solve with standard techniques. |

Scutellà, Maria Grazia
We show how the color-coding method of Alon, Yuster and Zwick, in its version specialized to compute simple paths of a specified cardinality k, can be extended to address the presence of arc lengths (the existence of such an extension was outlined by the authors for a more general color-coding algorithm, but it was not explicitly described, and its time complexity was not discussed). The extension is then used to derive approximation results for the general longest path problem. |
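The core of the cardinality-k color-coding scheme with arc lengths can be sketched as follows (an illustrative implementation, not the report's; the graph and parameters are made up). Each trial colours the vertices with k colours at random, and a dynamic program over (vertex, colour-subset) pairs finds the best colourful, hence simple, path:

```python
import random

def colorful_longest_k_path(n, arcs, k, trials=300, seed=1):
    """Longest simple path on exactly k vertices via colour-coding.

    arcs: list of (u, v, length).  A path whose vertices all have distinct
    colours ('colourful') is necessarily simple; enough random trials make
    the probability of missing the optimum negligible."""
    rng = random.Random(seed)
    adj = [[] for _ in range(n)]
    for u, v, ln in arcs:
        adj[u].append((v, ln))
    full, best = (1 << k) - 1, None
    for _ in range(trials):
        colour = [rng.randrange(k) for _ in range(n)]
        # dp[(v, s)] = best length of a colourful path ending at v
        # whose vertices use exactly the colour set s (a bitmask).
        dp = {(v, 1 << colour[v]): 0 for v in range(n)}
        for s in range(1, full + 1):          # subsets in increasing order
            for v in range(n):
                if (v, s) not in dp:
                    continue
                for w, ln in adj[v]:
                    bit = 1 << colour[w]
                    if s & bit:               # colour already used: skip
                        continue
                    key = (w, s | bit)
                    if dp.get(key, -1) < dp[(v, s)] + ln:
                        dp[key] = dp[(v, s)] + ln
        for v in range(n):
            if (v, full) in dp and (best is None or dp[(v, full)] > best):
                best = dp[(v, full)]
    return best

# 4-node example: the longest 3-vertex path uses the heavy shortcut 0 -> 2.
arcs = [(0, 1, 5), (1, 2, 7), (2, 3, 3), (0, 2, 100)]
print(colorful_longest_k_path(4, arcs, k=3))
```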

Ambrosino, D. and Scutellà, Maria Grazia
We study some complex distribution network design problems, which involve facility location, warehousing, transportation and inventory decisions. Several realistic scenarios are investigated. Two kinds of mathematical programming formulations are proposed for all the introduced problems, together with a proof of their correctness. Some formulations extend models proposed by Perl and Daskin (1985) for some warehouse location-routing problems; other formulations are based on flow variables and constraints. |

Grossi, Roberto and Italiano, Giuseppe F.
Due to a printing problem, the revised version of our paper (Efficient cross-trees for external memory, in ``External Memory Algorithms and Visualization'', James Abello and Jeffrey Scott Vitter eds., DIMACS Series in Discrete Mathematics and Theoretical Computer Science, American Mathematical Society Press, Providence, RI, 1999) has been replaced by the (shorter) submitted version. In this technical report, we include the revised version that has not been published by mistake. In particular, we describe efficient methods for organizing and maintaining large multidimensional data sets in external memory. This is particularly important as access to external memory is currently several orders of magnitude slower than access to main memory, and current technology advances are likely to make this gap even wider. We focus particularly on multidimensional data sets which must be kept simultaneously sorted under several total orderings: these orderings may be defined by the user, and may also be changed dynamically by the user throughout the lifetime of the data structures, according to the application at hand. Besides standard insertions and deletions of data, our proposed solution can efficiently perform {\em split\/} and {\em concatenate\/} operations on the whole data sets according to any ordering. This allows the user: (1)~to dynamically rearrange any ordering of a segment of data, in a time that is faster than recomputing the new ordering from scratch; (2)~to efficiently answer queries related to the data contained in a particular range of the current orderings. Our solution fully generalizes the notion of B-trees to higher dimensions by carefully combining space-driven and data-driven partitions. Balancing is easy as we introduce a new multidimensional data structure, {\em the cross-tree}, that is the cross product of balanced trees. 
As a result, the cross-tree is competitive with other popular index data structures that require linear space (including k-d trees, quad-trees, grid files, space filling curves, hB-trees, and R-trees). |

Frangioni, Antonio and Necciari, Emiliano and Scutellà, Maria Grazia
We propose new local search algorithms for minimum makespan machine scheduling problems, which perform multiple exchanges of jobs among machines. Inspired by the work of Thompson and Orlin (1989) on cyclic transfer neighborhood structures, we model multiple exchanges of jobs as special disjoint cycles and paths in a suitably defined improvement graph, by extending definitions and properties introduced in the context of VRP (Thompson and Psaraftis, 1993) and of CMST (Ahuja, Orlin and Sharma, 1998). Several algorithms for searching the neighborhood are suggested. We report the results of a wide computational experimentation, on different families of benchmark instances, performed for the case of identical machines. The minimum makespan machine scheduling problem with identical machines has been selected as a case study to perform a comparison among the alternative algorithms, and to discover families of instances for which the proposed neighborhood may be promising in practice. The obtained results are very interesting. On some families of instances, which are very hard to solve exactly, the most promising multi-exchange algorithms proved to dominate, in gap and in time, competitive benchmark heuristics. |
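For flavour only, a drastically simplified neighbourhood (single-job moves and pairwise swaps on top of an LPT start; the cyclic multi-exchanges over improvement graphs studied in the report generalize such moves) might look like:

```python
def local_search_makespan(jobs, m):
    """Assign jobs to m identical machines; improve makespan by local search."""
    # LPT construction: place each job, longest first, on the least-loaded machine.
    order = sorted(range(len(jobs)), key=lambda j: -jobs[j])
    machine = [[] for _ in range(m)]
    load = [0] * m
    for j in order:
        i = load.index(min(load))
        machine[i].append(j)
        load[i] += jobs[j]

    def try_improve():
        hi = load.index(max(load))            # most loaded machine
        for j in machine[hi]:
            for i in range(m):
                if i == hi:
                    continue
                # single-job move: j goes from machine hi to machine i
                if load[i] + jobs[j] < load[hi]:
                    machine[hi].remove(j); machine[i].append(j)
                    load[hi] -= jobs[j]; load[i] += jobs[j]
                    return True
                # pairwise swap: exchange j with some job k on machine i
                for k in machine[i]:
                    nh = load[hi] - jobs[j] + jobs[k]
                    ni = load[i] - jobs[k] + jobs[j]
                    if max(nh, ni) < load[hi]:
                        machine[hi].remove(j); machine[i].remove(k)
                        machine[hi].append(k); machine[i].append(j)
                        load[hi], load[i] = nh, ni
                        return True
        return False

    while try_improve():
        pass
    return max(load)

# LPT alone yields makespan 17 here; a swap improves it to the optimum, 16.
print(local_search_makespan([7, 7, 6, 6, 4], m=2))
```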

Frangioni, Antonio
We study a class of generalized bundle methods where the stabilizing term can be any closed convex function satisfying certain properties. This setting covers several algorithms from the literature that have so far been regarded as distinct. Under different hypotheses on the stabilizing term and/or the function to be minimized, we prove finite termination, asymptotic convergence and finite convergence to an optimal point, with or without limits on the number of serious steps and/or requiring the proximal parameter to go to infinity. The convergence proofs are conceived to leave a high degree of freedom in the crucial implementation features of the algorithm, i.e., the management of the bundle of subgradients (b-strategy) and of the proximal parameter (t-strategy). We extensively exploit a dual view of bundle methods, which are shown to be a dual ascent approach to one nonlinear problem in an appropriate dual space, where nonlinear subproblems are approximately solved at each step with an inner linearization approach. This allows us to precisely characterize the changes in the subproblems during the serious steps, since the dual problem is not tied to the local concept of ε-subdifferential. For some of the proofs, a generalization of inf-compactness, called *-compactness, is required; this concept is related to that of asymptotically well-behaved functions. |

Del Corso, Gianna M. and Romani, Francesco
In this paper we present heuristic techniques for the reduction of the bandwidth of a sparse matrix as well as for the reduction of the cost of the associated Cholesky factorization. Our algorithms are inspired by the Spectral Method of Barnard, Pothen and Simon [BPS95], which derives a permutation for reducing the envelope-size of a sparse matrix by computing the second eigenvector of the associated Laplacian matrix. Two main modifications of that method are proposed and tested. The first is based on the experimental observation that it is often preferable to perform only a few iterations of an iterative method converging to the second eigenvector; the second is the introduction of a weighted Laplacian. These simple ideas allow us to obtain a family of spectral methods that have been carefully tested on a set of matrices whose sizes range from a few hundred to one million. |
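The spectral-ordering idea that the method builds on can be sketched in a few lines (an illustrative toy, not the report's algorithm: plain power iteration with deflation on a small dense Laplacian, example graph made up):

```python
import math
import random

def fiedler_order(n, edges, iters=2000, seed=0):
    """Order vertices by an approximate second Laplacian eigenvector.

    Power iteration on (cI - L), deflating the constant eigenvector each
    step, converges to the Fiedler vector; sorting by its entries yields
    a bandwidth-reducing permutation.  Pure Python, small n only."""
    deg = [0] * n
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v); adj[v].append(u)
        deg[u] += 1; deg[v] += 1
    c = 2 * max(deg) + 1                      # shift so c - lambda stays positive
    rng = random.Random(seed)
    x = [rng.uniform(-1, 1) for _ in range(n)]
    for _ in range(iters):
        m = sum(x) / n                        # deflate the constant eigenvector
        x = [xi - m for xi in x]
        # y = (cI - L) x, where (L x)_v = deg(v) x_v - sum_{w ~ v} x_w
        y = [(c - deg[v]) * x[v] + sum(x[w] for w in adj[v]) for v in range(n)]
        nrm = math.sqrt(sum(yi * yi for yi in y)) or 1.0
        x = [yi / nrm for yi in y]
    return sorted(range(n), key=lambda v: x[v])

# A path graph with scrambled labels (path 3-0-4-1-2): the spectral order
# recovers a contiguous labelling, i.e. bandwidth 1 after permutation.
edges = [(3, 0), (0, 4), (4, 1), (1, 2)]
order = fiedler_order(5, edges)
print(order)
```

The report's first modification corresponds to cutting `iters` down sharply; the second would replace the unit edge weights above with weighted ones.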

Ferragina, Paolo and Muthukrishnan, S.
Consider the following two problems. Problem 1. We are given a collection of 2-tuples T={T_1,...,T_k} where each T_i=(T_i^1,T_i^2), for two strings T_i^1 and T_i^2. This collection may be preprocessed. A query takes as input a pattern tuple P=(P^1,P^2) and the problem is to determine all tuples T_i for which P^1 is a substring of T_i^1 and P^2 is a substring of T_i^2. Problem 2. We are given a collection of documents D={D_1,...,D_k} that may be preprocessed. Each query is a pair of pattern strings p_1 and p_2, and the goal is to determine the set of all documents that contain both patterns as substrings. In this paper, we show that these two problems are related and make progress on solving them efficiently by proposing the first non-trivial results. |
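A naive scanning baseline for Problem 2 (purely to fix the problem statement; the documents are hypothetical and this version does no preprocessing at all):

```python
# Hypothetical document collection for a two-pattern query.
docs = {
    "D1": "the quick brown fox",
    "D2": "quick foxes are brown",
    "D3": "lazy dogs sleep",
}

def two_pattern_query(docs, p1, p2):
    """Return (sorted) names of documents containing both p1 and p2."""
    return sorted(name for name, text in docs.items()
                  if p1 in text and p2 in text)

print(two_pattern_query(docs, "quick", "brown"))
```

The point of the paper is precisely that this per-query linear scan can be beaten by preprocessing the collection.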

Bigi, Giancarlo and Castellani, Marco
The uniqueness of the KKT multipliers of a nonlinear program has been studied in a well-known paper by Kyparisis. In the first part of this note, we show that the characterization obtained in that paper does not provide a satisfactory result for the multiobjective case. Thus, we introduce a new regularity condition, which involves also the objective functions, and we show that it is necessary and sufficient in order to have unique KKT multipliers. Moreover, we use this condition to refine a second order necessary optimality condition, which has been obtained in a recent paper. |

Bellia, Marco and Occhiuto, Maria Eugena
MGU(k-depth) is the family of problems which contains, for each natural k, MGU restricted to inputs with depth no greater than k. MGU(k-breadth) is the family which, for each natural k, contains MGU restricted to inputs with breadth no greater than k. We prove that, for each k>1, MGU(k-depth) is equivalent (under LogSpace reduction) to MGU, and that MGU(k-breadth) is likewise equivalent (under LogSpace reduction) to MGU. This shows that bounds either on the depth or on the breadth of the input do not significantly speed up the computation of unification on PRAM machines. The paper concludes with a look at the design of parallel unification algorithms, and at unification on flat inputs. As a final contribution, we introduce a new algorithm which improves on the performance of known algorithms for parallel unification and also computes in polylog PRAM time on flat inputs. |
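For reference, a standard sequential most-general-unifier (not the parallel PRAM algorithms the paper concerns; the term encoding is an assumption: variables as strings, function applications as tuples whose first element is the functor) can be written as:

```python
def walk(t, subst):
    """Follow variable bindings until a non-bound term is reached."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def occurs(var, t, subst):
    """Occurs check: does var appear inside t under subst?"""
    t = walk(t, subst)
    if t == var:
        return True
    return isinstance(t, tuple) and any(occurs(var, a, subst) for a in t[1:])

def unify(a, b, subst=None):
    """Return an MGU of a and b as a substitution dict, or None on failure."""
    subst = dict(subst or {})
    stack = [(a, b)]
    while stack:
        s, t = stack.pop()
        s, t = walk(s, subst), walk(t, subst)
        if s == t:
            continue
        if isinstance(s, str):                 # s is an unbound variable
            if occurs(s, t, subst):
                return None
            subst[s] = t
        elif isinstance(t, str):               # flip so the variable is first
            stack.append((t, s))
        elif s[0] == t[0] and len(s) == len(t):
            stack.extend(zip(s[1:], t[1:]))    # same functor: unify arguments
        else:
            return None                        # clash: different functors
    return subst

# unify f(X, g(Y)) with f(a, g(X)):  X -> a, and Y -> X -> a
mgu = unify(("f", "X", ("g", "Y")), ("f", ("a",), ("g", "X")))
```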

Prencipe, Giuseppe
The distributed coordination and control of a set of autonomous, mobile agents/ robots is a widely studied topic in a variety of fields, such as engineering, artificial intelligence, and artificial life. In the majority of the studies, the problem has been approached from an empirical point of view. A different approach to the problem has been introduced in [6], where the authors analyzed the distributed coordination and control of a set of autonomous, mobile robots from a computational point of view. With this purpose, they defined a model where the world was inhabited by a set of totally autonomous, mobile, memoryless entities that were requested to form a generic pattern. From that study, it turned out that whether or not such a task could be accomplished depended on the amount of common knowledge the robots had; in particular, on the direction and orientation of the local coordinate system. Among the issues left open was which kind of patterns a set of such mobile units can form when they are even in number and agree on the orientation and direction of only one axis. This paper answers that question. |

Bruni, Roberto and de Frutos-Escrig, D. and Marti-Oliet, N. and Montanari, Ugo
The definition of SOS formats ensuring that bisimilarity on closed terms is a congruence has received much attention in the last two decades. For dealing with open system specifications, the congruence is usually lifted from closed terms to open terms by instantiating the free variables in all possible ways; the only alternatives considered in the literature rely on Larsen and Xinxin's context systems and Rensink's conditional transition systems. We propose a different approach based on tile logic, where both closed and open terms are managed analogously. In particular, we analyze the `bisimilarity as congruence' property for several tile formats that accomplish different concepts of subterm sharing. |

Spadafora, Ippolito
The identity crisis underway in our society, due to political and economic factors, has been further highlighted by the contemporary process of identity construction within the culture of simulation; this process must be placed in the broader context of the confrontation, and by now slow but steady erosion, of the boundaries between the real and the virtual, the animate and the inanimate, the unitary self and the multiple self. We present these new ideas about human identity, which concern the set of concepts gathered under the name of "Postmodernity": concepts such as decentred, fluid, opaque, non-linear. We trace a brief history of the areas of computer science that have had the greatest impact on our culture, such as Artificial Intelligence, Artificial Life and the Internet. We do not go into the technical details of these topics, but rather try to explore the ideas and concepts on which they are based, and the consequences they have had for human thought and for the culture of everyday life. That is, we consider the various topics and research areas that have contributed to the birth and development of a new line of study that goes by the name of "Philosophy of the Computer". |

Nguyen, S. and Pallottino, Stefano and Scutellà, Maria Grazia
Shortest path problems are among the most studied network flow problems, with interesting applications in various fields. Mainly in large-scale transportation systems, a sequence of shortest path problems often needs to be solved, where the (k+1)-th problem differs only slightly from the k-th one. In that context, a promising line of research is to study efficient reoptimization approaches, able to exploit useful information from one shortest path computation to the next one. In this work, we shall focus on the change of the origin node in the shortest path problem to be solved. After reviewing the classical (dual) algorithms described in the literature so far, which essentially show a Dijkstra-like behavior, a new dual approach will be proposed, which could be particularly promising in practice. |
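As the from-scratch baseline that reoptimization approaches aim to beat when only the origin changes, a plain Dijkstra computation (illustrative sketch; the graph below is made up) is:

```python
import heapq

def dijkstra(adj, origin):
    """Single-origin shortest distances on a graph with nonnegative arc costs."""
    dist = {origin: 0}
    heap = [(0, origin)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                          # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

adj = {"s": [("a", 2), ("b", 5)], "a": [("b", 1), ("t", 7)], "b": [("t", 2)]}
print(dijkstra(adj, "s")["t"])                # shortest route s -> a -> b -> t
```

A reoptimization approach would instead reuse the distance labels computed for one origin when solving for the next, rather than restarting from an empty heap.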

Ferragina, Paolo and Manzini, Giovanni
There is a surging interest in designing succinct data structures for basic searching problems (see [Munro99] and references therein). The motivation is to be found in the exponential increase of electronic data nowadays available, which is even surpassing the significant increase in memory and disk storage capacities of current computers. Space reduction is an attractive issue because it is also intimately related to performance improvements, as noted by several authors (e.g. Knuth [knuth3], Bentley [bentley]). In designing these implicit data structures the goal is to reduce as much as possible the auxiliary information kept together with the input data without introducing a significant slowdown in the final query performance. Yet input data are represented in their entirety, thus taking no advantage of possible repetitiveness in them. The importance of these issues is well known to programmers, who typically use various tricks to squeeze data as much as possible and still achieve good query performance. Their approaches, though, boil down to heuristics whose effectiveness is witnessed only by experimentation. In this paper, we address the issue of compressing and indexing data by studying it in a theoretical framework. We devise a novel data structure for indexing and searching whose space occupancy is a function of the entropy of the underlying data set. The novelty resides in the careful combination of a compression algorithm, proposed by Burrows-Wheeler [bw], with the structural properties of a well known indexing tool, the Suffix Array [MM93]. We call the data structure opportunistic since its space occupancy is decreased when the input is compressible, at no significant slowdown in the query performance and without any assumption on a particular fixed distribution. 
More precisely, its space occupancy is optimal in an information-content sense because a text $T[1,u]$ is stored using $O(k(T)) + o(1)$ bits per input symbol, where $k(T)$ is the $k$th order entropy of $T$ (the bound holds for any fixed $k$). Given an arbitrary string $P[1,p]$, the opportunistic data structure allows searching for the $occ$ occurrences of $P$ in $T$ in $O(p + occ \log^\epsilon u)$ time (for any fixed $\epsilon >0$). If data are not compressible, then we achieve the best space bound currently known [GV00]; otherwise our solution improves the succinct suffix array in [GV00] and the classical suffix tree and suffix array data structures either in space or in query time complexity, or both. It was a belief [Witten:1999:MGC] that some space overhead should be paid to use full-text indices (i.e. suffix trees or suffix arrays) with respect to word-based indices (i.e. inverted lists). The results in this paper show that no space overhead is needed at all; as an application, we improve the space and query time complexity of the well-known Glimpse tool [glimpse]. We finally investigate the modifiability of our opportunistic data structure by studying how to combine its basic ideas with a dynamic setting, thus achieving effective searching and updating time bounds. |
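A toy illustration of the compression side only (the actual opportunistic index couples this transform with suffix-array machinery; this sketch is not from the paper): the Burrows-Wheeler transform tends to group equal characters, which is what makes entropy-bounded storage possible:

```python
def bwt(text, terminator="$"):
    """Burrows-Wheeler transform via sorted rotations (quadratic, demo only)."""
    s = text + terminator                    # terminator sorts before letters
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    # The transform is the last column of the sorted rotation matrix.
    return "".join(rot[-1] for rot in rotations)

print(bwt("banana"))
```

On "banana" the output clusters the three a's together, a regularity that a zeroth-order coder can then exploit.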

Corradini, Andrea and Gadducci, Fabio and Kahl, W.
Multi-algebras allow modelling nondeterminism in an algebraic framework by interpreting operators as functions from individual arguments to sets of possible results. Starting from a functorial presentation of multi-algebras based on "gs-monoidal theories", we argue that specifications for multi-algebras should be based on the notion of term graphs instead of on standard terms. We consider the simplest case of (term graph) equational specification, showing that it enjoys an unrestricted form of substitutivity. We discuss the expressive power of equational specification for multi-algebras, and we sketch possible extensions of the calculus. |

Attardi, Giuseppe and Simi, Maria and Tanganelli, Filippo and Tommasi, Alessandro
In this work we propose a model to learn conceptual descriptions of categories from precategorized texts. The model is general and parametric, and it captures most of the statistical approaches to classification as well as allowing the definition of more symbolic learning schemes. The algorithm scheme has been instantiated into three different algorithms, which have been implemented and tested on a collection of documents obtained from the Web. As a possible application of the descriptions obtained, classification was performed on a test set. Results are somewhat surprising, and stand in contrast with most experiments reported in the literature, possibly giving hints about a different research direction. |

Baldan, Paolo and Busi, N. and Corradini, Andrea and Pinna, G.M.
We propose a functorial concurrent semantics for Petri nets extended with read and inhibitor arcs, that we call inhibitor nets. Along the lines of the seminal work of Winskel on safe nets and extending a previous work in [1] for nets with read arcs, the truly concurrent semantics is given at a categorical level via a chain of functors leading from the category SW-IN of semi-weighted inhibitor nets to the category Dom of finitary prime algebraic domains (equivalent to the category PES of prime event structures). As an intermediate semantic model, we introduce inhibitor event structures, an extension of prime event structures able to faithfully capture the dependencies among events which arise in the presence of read and inhibitor arcs. [1] P. Baldan, A. Corradini, U. Montanari "An Event Structure Semantics for P/T Contextual Nets: Asymmetric Event Structures" FoSSaCS '98 Conference Proceedings, Springer LNCS vol. 1378, 1998, pp. 63-80. |

Ciriani, Valentina
Recently introduced pseudocubes and SPP forms have made it possible to represent Boolean functions with a more compact expression than the SP form. In this report we present new properties of pseudocubes. In order to study and manipulate pseudocubes, we count and characterize sub-pseudocubes. Letting P be a pseudocube, we derive a normal form and the canonical expression of any sub-pseudoproduct of P. |

Frangioni, Antonio and Scutellà, Maria Grazia and Necciari, Emiliano
We address the problem of scheduling independent jobs to identical parallel machines in order to minimize the completion time (makespan). This is a hard problem, for which both constructive and improvement heuristics have been proposed. We describe new local search algorithms, whose neighborhood structure is based on multiple exchanges of jobs among machines. Inspired by the work of Thompson and Orlin (1989) on cyclic transfer neighborhood structures, in these algorithms multiple exchanges are modelled in terms of ``disjoint cycles'' on suitably defined improvement graphs, and improvement moves are obtained by heuristically constructing special disjoint cycles in such auxiliary graphs. The algorithms have been tested on benchmark instances, characterized by uniformly distributed processing times, and on a new class of "highly non-uniform" instances, generated by the authors, which seem to be, in some cases, more difficult than the "uniform" ones. The algorithms have shown a potential both for generating highly accurate solutions when the running time is not an issue, and for rapidly obtaining good solutions when the running time has to be kept small. |

Semini, Laura and Montangero, Carlo
The purpose of this work is to establish some foundational ground for the logics that can be used to reason about distributed systems where asynchronous message passing is the only possible mechanism for communication, e.g. distributed workflow systems. We define adtl, a logic with modalities for time and locality. We introduce a complete and decidable axiom system, which characterizes the class of structures that reflect the nature of the causal relation when communications are asynchronous. |

Ruggieri, Salvatore
We present an analytic evaluation of the run-time behavior of the C4.5 algorithm which highlights some efficiency improvements. We have implemented a more efficient version of the algorithm, called EC4.5, that improves on C4.5 by adopting the best among three strategies at each node construction. The first strategy uses a binary search of thresholds instead of the linear search of C4.5. The second strategy adopts a counting sort method instead of the quicksort of C4.5. The third strategy uses a main-memory version of the RainForest algorithm for constructing decision trees. Our implementation computes the same decision trees as C4.5 with a performance gain of up to 5 times. |
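The counting-sort strategy can be sketched as follows (an illustrative Python fragment under the assumption of integer-coded attribute values; it is not the EC4.5 code): sorting n local cases whose values come from a known range of size k costs O(n + k), and C4.5-style candidate thresholds are then the midpoints between consecutive distinct sorted values.

```python
def counting_sort(values, max_value):
    # O(n + k) sort for integer-coded attribute values in [0, max_value]:
    # cheaper than a comparison sort at deep nodes with few local cases.
    counts = [0] * (max_value + 1)
    for v in values:
        counts[v] += 1
    out = []
    for v, c in enumerate(counts):
        out.extend([v] * c)
    return out

def candidate_thresholds(values, max_value):
    # C4.5 evaluates a split at each midpoint between distinct adjacent values.
    s = counting_sort(values, max_value)
    return [(a + b) / 2 for a, b in zip(s, s[1:]) if a != b]

print(candidate_thresholds([3, 1, 4, 1, 5], 5))  # → [2.0, 3.5, 4.5]
```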

Baldan, Paolo and Corradini, Andrea and Montanari, Ugo
We present an event structure semantics for contextual nets, an extension of P/T Petri nets where transitions can check for the presence of tokens without consuming them (read-only operations). A basic role is played by asymmetric event structures, a generalization of Winskel's prime event structures where symmetric conflict is replaced by a relation modelling asymmetric conflict or weak causality, used to represent a new kind of dependency between events arising in contextual nets. Extending Winskel's seminal work on safe nets, the truly concurrent event-based semantics of contextual nets is given at the categorical level via a chain of coreflections leading from the category SW-CN of semi-weighted contextual nets to the category DOM of finitary prime algebraic domains. First an unfolding construction generates from a contextual net a corresponding occurrence contextual net, from which an asymmetric event structure is extracted. Then the configurations of the asymmetric event structure, endowed with a suitable order, are shown to form a finitary prime algebraic domain. We also investigate the relation between the proposed unfolding semantics and several deterministic process semantics for contextual nets in the literature. In particular, the domain obtained via the unfolding is characterized as the collection of the deterministic processes of the net endowed with a kind of prefix ordering. |

Ciriani, Valentina
The new concept of pseudo-product, as an extension of the concept of product, was introduced in a recent paper by Luccio and Pagli. An arbitrary Boolean function can be expressed as a sum of pseudo-products (SPP), and this expression is, in general, shorter than the SP form. We present new algebraic SPP-expressions for the output functions of parallel integer multipliers based on Wallace Trees and column compression techniques. |

Bellia, Marco and Occhiuto, Maria Eugena
We present an algorithm which exploits a new approach to the problem of finding the connected components of an undirected graph (CCug for short) with v vertices and e edges. The algorithm has depth O(log^2 e) (base-2 logarithm) on a CREW PRAM using e processors, hence its cost is not affected by the number v of graph vertices. This makes it the algorithm with the best speedup and the best cost for CCug on highly sparse graphs. On dense graphs, conversely, its performance is comparable to that of the algorithm defined by Han and Wagner in 1990 and slightly worse than that of the algorithm defined by Chong and Lam in 1995. A variant of the algorithm with the same bound but running on the EREW model is also included. The algorithm can be used to find the transitive closure of binary, symmetric relations. In this case e is the number of axioms and v is the range of the relation. |
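For reference, the quantity being computed can be pinned down with a short sequential union-find sketch (illustrative Python; the report's contribution is the O(log^2 e)-depth PRAM algorithm, which works quite differently):

```python
def connected_components(v, edges):
    # Sequential union-find with path halving: returns the number of
    # connected components of an undirected graph on vertices 0..v-1.
    parent = list(range(v))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    return len({find(x) for x in range(v)})

print(connected_components(6, [(0, 1), (1, 2), (3, 4)]))  # → 3
```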

Danelutto, Marco
We propose a new implementation model for structured parallel programming models based on skeletons. The model is aimed at overcoming some of the problems observed in traditional, template based implementations of skeleton languages, while preserving all the positive features of structured parallel programming models. The proposed implementation model does not rely on static process networks like the template based implementation model for skeletons. It rather implements skeleton programs by properly scheduling macro data-flow instructions (corresponding to the sequential portions of code appearing in the skeleton code) to a set of macro data-flow interpreters running on the processing elements of the target architecture. We discuss some preliminary experimental results demonstrating the feasibility and effectiveness of our approach. |

Nonato, Maddalena and Pallottino, Stefano and Xuewen, B.
This paper presents in a unified framework the most efficient shortest path algorithms of the List class to highlight strategies for handling the candidate node set as well as issues related to the efficient implementation of such strategies. A deep analysis of the strategies which inspired two of the most robust algorithms for the shortest path tree problem, due to Tarjan [27] and to Goldberg and Radzik [18], allows the proposal of some variants. A computational study of such variants is conducted on the same test problem families used in [6], in order to validate "a posteriori" the implementation choices. The experimental data suggest that all of the new variants are robust in practice and, for each of the tested problem families, at least one of the new variants either improves the performance or performs as well as both benchmark algorithms. |
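The algorithms of the List class share the generic label-correcting scheme below (a minimal Python sketch; the surveyed methods differ precisely in how the candidate set is maintained, and a plain FIFO queue is used here in place of the more refined orderings of Tarjan and of Goldberg and Radzik):

```python
from collections import deque

def shortest_paths(n, arcs, source):
    # Generic label-correcting method: repeatedly extract a node from the
    # candidate set and relax its outgoing arcs; the handling of the
    # candidate set (here FIFO) is the distinguishing design choice.
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    graph = [[] for _ in range(n)]
    for u, v, c in arcs:
        graph[u].append((v, c))
    queue, in_queue = deque([source]), [False] * n
    in_queue[source] = True
    while queue:
        u = queue.popleft()
        in_queue[u] = False
        for v, c in graph[u]:
            if dist[u] + c < dist[v]:
                dist[v] = dist[u] + c
                if not in_queue[v]:
                    queue.append(v)
                    in_queue[v] = True
    return dist

arcs = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 1), (2, 3, 5)]
print(shortest_paths(4, arcs, 0))  # → [0, 3, 1, 4]
```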

Bettina, Klinz and Scutellà, Maria Grazia
We show that the Balanced Network Flow Problem in the case of general weights, i.e., the problem of finding a feasible flow on a given network which minimizes the difference between the maximum and the minimum weighted flow on single arcs, can be solved in strongly polynomial time. The proposed solution algorithm consists of extending Megiddo's approach for parametric programming. |

Ahuja, Ravindra K. and Orlin, James B. and Pallottino, Stefano and Scutellà, Maria Grazia
We investigate minimum time and minimum cost path problems in street networks regulated by traffic lights. We show that the minimum time path problem is polynomially solvable. On the other hand, minimum cost path problems are in general NP-hard. Particular but realistic cases which are polynomially solvable are discussed. |

Cappanera, Paola
Obnoxious location models are models in which customers, rather than considering the facility desirable and trying to have it as close as possible to their own location, avoid the facility and stay away from it. Typical applications are optimal locations of nuclear reactors, garbage dumps, or water purification plants. This work presents a survey of mathematical models for undesirable location problems in the plane and particularly on networks; solution procedures are briefly described. A brief review of extensive (obnoxious) facility location problems in networks is also given. Finally, critical aspects of existing models are identified and some directions for future research are suggested. |

Serra Capizzano, Stefano and Tilli, P.
Starting from the Finite Difference discretization of a general second order partial differential equation (PDE), we introduce the notion of generalized Locally Toeplitz sequence of matrices. The singular value distribution (the eigenvalue distribution in the Hermitian case) is studied and characterized for generalized Locally Toeplitz sequences in terms of weighted multidimensional Szegő formulas, which extend previous results due to the second author concerning the unilevel case. The application of this theoretical analysis to the numerical solution of elliptic PDEs is finally discussed. |

Semini, Laura and Montangero, Carlo
It is fairly accepted that the realization of complex systems must be accomplished step by step from the initial specification, through a sequence of intermediate phases, to the final program. These development steps, linking a preliminary version, or description, of the program to a more detailed one, are usually called refinement steps, while the intermediate stages of a refinement process are called levels of abstraction. A refinement calculus is a means to support this modus operandi in program development, allowing one to link different levels of abstraction: it introduces a precise relation between intermediate descriptions, and the rules to check whether the relation is satisfied. Tuple space languages are concurrent languages that foster the definition of autonomous entities of computation (the processes), and offer mechanisms for their synchronization and communication. In particular, they represent one of the most acknowledged models of coordination. Tuple space languages are based on the idea that a dynamic collection of tuples can act as the shared state of concurrent processes, and play the role of coordination medium among them. To build a refinement calculus for tuple spaces, we address three points in this paper: 1) We single out a specification language, a variation of first order temporal logic. Temporal relations between propositional formulae are not expressive enough to describe relations between tuple spaces, which are multisets of atoms. The specification language, called Oikos-tl, includes three new temporal operators that enhance the expressive power of the logic, permitting one to link directly state transitions and state configurations. The semantics of the specification language is formally defined, and a set of useful properties for refinement is shown. 2) We introduce a reference language for tuple spaces, dubbed TuSpRel, and define its axiomatic and operational semantics. We need the former to derive properties, the latter to describe the allowed computations of a system. We relate these descriptions, and guarantee that using the axiomatic semantics we can derive properties which are correct and complete with respect to the operational behaviour. The non-deterministic features of tuple space languages make this result new, and more complex than in other programming paradigms. One of the contributions of our work is the idea of deriving weakest preconditions by exploiting the demonic strict choice in non-deterministic selection. The transition system defining the operational semantics is based on the new notion of enabling precondition, which exploits the angelic strict choice. 3) To build the refinement calculus, we take a compositional approach. We first consider the basic statements of the language, and say under which conditions they satisfy a property; we then compose these proofs to derive that a system refines a specification. Finally, in the refinement calculus definition, we extend to tuple space languages the ability to exploit logic formulae to specify the behaviour of unrefined modules: in the intermediate steps, a system is only partially written in the programming language, and the still unrefined features are described by logical formulae. |

Semini, Laura and Montangero, Carlo
The purpose of this work is to establish some foundational ground for the logics that can be used to reason about distributed systems communicating asynchronously via message passing, e.g. distributed workflow systems. These systems rule out synchronized local clocks, and reasoning on global time or state: message passing is the only possible mechanism for communication, and therefore for causality across components. We introduce {\em adtl}, a logic with modalities for time and locality. We show that its axiom system characterizes the class of structures that reflect the nature of the causal relation when communication is asynchronous. Correspondingly, the axioms are such that any attempt to formulate in the logic properties about the unspeakable entities, like a global state, results in a contradiction. |

Aringhieri, R. and Hansen, P. and Malucelli, Federico
The paper addresses the problem of computing the Hyper-Wiener index for molecules in the alkane family. A new algorithm with linear computational complexity is proposed. The algorithm has optimal complexity, and can be extended to the family of all acyclic molecular structures. Some computational results are reported. |
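For concreteness, the index itself can be computed by a quadratic BFS reference routine (a Python sketch; the report's linear-time algorithm for alkane trees is of course different). The Hyper-Wiener index of a graph G is WW(G) = 1/2 Σ_{u<v} (d(u,v) + d(u,v)^2), with d the shortest-path distance.

```python
from collections import deque

def hyper_wiener(adj):
    # Quadratic reference computation of WW(G): BFS from every vertex,
    # sum d + d^2 over ordered pairs, then divide by 4 (each unordered
    # pair is counted twice, and the index halves the pairwise sum).
    n = len(adj)
    total = 0
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d + d * d for d in dist)
    return total // 4

# The carbon skeleton of propane, a 3-vertex path:
print(hyper_wiener([[1], [0, 2], [1]]))  # → 5
```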

Aringhieri, R. and Hansen, P. and Malucelli, Federico
The need to generate graphs representing chemical structures suggested the use of computers and, as a consequence, the development of efficient enumeration algorithms. This paper considers the enumeration of isomeric acyclic structures, focusing on the alkane molecular family. Two new algorithms based on Reverse Search are presented. The main concern of this paper is to highlight the importance of asymptotic complexity and running time behaviour in the analysis of enumeration algorithms. |

Farinaccio, Fernanda and Gallo, Giorgio and Sandi, Claudio
This note lists and briefly summarizes the available techniques for measuring the efficiency of systems devoted to the production of goods and services. The aim is to assess the effectiveness of such techniques, in particular for the evaluation of transportation systems oriented towards public utility rather than profit. Two efficiency evaluation techniques that have by now become operational, known as parametric methods and non-parametric methods, are analysed in detail. Finally, applications of these techniques to the public transport sector are reported. |

Frangioni, Antonio and Serra Capizzano, Stefano
In this paper, we extend some general results on linear positive operators to the case of matrix-valued operators taking values in certain spaces of Hermitian matrices. These results are then used to show that certain matrices are optimal preconditioners for the linear systems arising in Interior Point methods for Linear Programs. In the case of graph matrices, corresponding to Min Cost Flow problems, we are able to find superlinear preconditioners for a class of ``local'' graphs generalizing the grid graphs commonly used in applications. For general graphs, we are able to give a spectral characterization of the preconditioners that may have theoretical interest of its own. |

Frangioni, Antonio and Pretolani, Daniele and Scutellà, Maria Grazia
We propose some techniques, based on minimum cost flow computations, for efficiently computing lower bounds for the Capacitated Minimum Spanning Tree problem (CMST). With respect to other methods proposed in the literature, our approaches often provide comparable bounds, with a substantially lower computational cost. For this reason, and for other features discussed in the paper, the proposed methods may be particularly promising in the context of exact algorithms for CMST. |

Farinaccio, Fernanda and Ostanello, Anna
Some questions regarding the possibility of considering DEA as an MCDM tool have been raised in the literature. Some of the answers, however, are more problematic than "resolutive". The multiplicity of "criteria" or "objectives" involved in a DEA model does not necessarily mean that such a representation can answer questions concerning the preferences over a given candidate (or alternative) set, as other MCDM tools can do validly, both conceptually and formally. A few methodological considerations and research proposals regarding these aspects are presented in the paper. In particular, the importance of clearly defining the "Decision Maker" figure is underlined, to distinguish situations in which an individual, rather than a group of judges, is involved in the evaluation and ranking of the candidate set. |

Alessi, F. and Baldan, Paolo and Honsell, F.
In this paper we introduce SFPM, a category of SFP domains which provides very satisfactory domain-models, i.e. partializations, of separable Stone spaces (2-Stone spaces). More specifically, SFPM is a subcategory of SFPep, closed under direct limits as well as many constructors, such as lifting, sum, product and Plotkin powerdomain. SFPM is structurally well behaved, in the sense that the functor MAX, which associates to each object of SFPM the Stone space of its maximal elements, is compositional with respect to the constructors above, and omega-continuous. A correspondence can be established between these constructors over SFPM and appropriate constructors on Stone spaces, whereby SFP domain-models of Stone spaces defined as solutions of a vast class of recursive equations in 2-Stone can be obtained simply by solving the corresponding equations in SFPM. Moreover, any continuous function between two 2-Stone spaces can be extended to a continuous function between any two SFPM domain-models of the original spaces. The category SFPM does not include all the SFP's with a 2-Stone space of maximal elements (CSFP's). We show that the CSFP's can be characterized precisely as suitable retracts of SFPM objects. The results proved for SFPM then easily extend to the wider category having CSFP's as objects. Using SFPM, we explain two classical partializations of the space of finitary hypersets (the hyperuniverse N-omega [Forti, Honsell, Lenisa]) based on SFP domains (see [Abramsky], [Mislove, Moss, Oles]). We also show that these two domains are not isomorphic, thus providing a negative answer to a problem raised in [Mislove, Moss, Oles]. |

Crescenzi, Pierluigi and Dardini, Leandro and Grossi, Roberto
The IP address lookup problem is one of the major bottlenecks in high performance routers. Previous solutions to this problem first describe it in the general terms of longest prefix matching and, then, are experimented on real routing tables $T$. In this paper, we follow the opposite direction. We start out from the experimental analysis of real data and, based upon our findings, we provide a new and simple solution to the IP address lookup problem. More precisely, our solution for $m$-bit IP addresses is a reasonable trade-off between performing a binary search on $T$ with $O(\log |T|)$ accesses, where $|T|$ is the number of entries in $T$, and executing a single access on a table of $2^m$ entries obtained by fully expanding $T$. While the previous results start out from space-efficient data structures and aim at lowering the $O(\log |T|)$ access cost, we start out from the expanded table with $2^m$ entries and aim at compressing it without an excessive increase in the number of accesses. Our algorithm takes \emph{exactly three} memory accesses and occupies $O(2^{m/2} + |T|^2)$ space in the worst case. Experiments on real routing tables for $m=32$ show that the space bound is overly pessimistic. Our solution occupies approximately one megabyte for the MaeEast routing table (which has $|T|\approx 44,000$ and requires approximately 250 KB) and, thus, takes three \emph{cache} accesses on any processor with 1 MB of L2 cache. According to the measurements obtained with the VTune tool on a Pentium II processor, each lookup requires 3 additional clock cycles besides the ones needed for the memory accesses. Assuming a clock cycle of 3.33 nanoseconds and an L2 cache latency of 15 nanoseconds, the search time for MaeEast can be estimated at 55 nanoseconds or, equivalently, our method performs 18 million lookups per second. |
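The longest-prefix-matching problem being compressed can be stated with a naive baseline (illustrative Python, not the paper's data structure, which answers every query in exactly three memory accesses): keep one table per prefix length and probe from most to least specific.

```python
def build(prefixes):
    # prefixes: (value left-aligned in 32 bits, prefix length, next hop).
    # Naive baseline: one dictionary per prefix length.
    tables = {}
    for prefix, length, nh in prefixes:
        tables.setdefault(length, {})[prefix >> (32 - length)] = nh
    return tables

def lookup(addr, tables):
    # Probe prefix lengths from most to least specific: up to 32 probes,
    # versus the paper's constant three accesses on a compressed table.
    for length in range(32, 0, -1):
        key = addr >> (32 - length)
        if key in tables.get(length, {}):
            return tables[length][key]
    return None

t = build([(0x0A000000, 8, "A"), (0x0A0A0000, 16, "B")])
print(lookup(0x0A0A0101, t))  # → B  (the /16 route wins)
print(lookup(0x0A010101, t))  # → A  (falls back to the /8 route)
```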

Gallo, Giorgio and Scutellà, Maria Grazia
We address a generalization of graphs, the directed hypergraphs, and show that they are a powerful tool in modelling and solving several relevant problems in many application areas. Such application areas include Linear Production Problems, Flexible Manufacturing Systems, Propositional Logic, Relational Databases, and Public Transportation Systems. |

Boraso, M. and Montangero, Carlo and Sedehi, H.
The accurate prediction of software development costs may have a large economic impact. As a consequence, considerable research attention is now directed at better understanding the software development process. The objective of this paper is to provide an experimental evaluation of the applicability, universality, and accuracy of some algorithmic software cost estimating models (COCOMO, TUCOMO, PUTNAM, COPMO, ESSE, and Function Points). Data on nine Italian Management Information Systems projects were collected and used to evaluate the performance of the models. The evaluation of the estimates was based on the Mean Magnitude of Relative Error and Prediction at level 25% criteria. Results indicate that the models performed reasonably well, and better when recalibrated with local data. |

Montangero, Carlo and Semini, Laura
We introduce Oikos_adtl, a specification language for distributed systems based on asynchronous communication via remote writings. The language is designed to support the composition of specifications. It allows expressing the global properties of a system in terms of the local properties of the components and of coordination patterns. Oikos_adtl is based on an asynchronous, distributed, temporal logic, which extends Unity to deal with components and events. We present the specification language and its semantics, introduce a number of compositionality theorems, and discuss some coordination patterns. A fragment of a standard case study is used to pragmatically validate the approach with respect to expressiveness and workability. |

Nguyen, S. and Pallottino, Stefano and Malucelli, Federico
This paper presents a new graph theoretic framework for the passenger assignment problem that encompasses simultaneously the departure time and the route choice. The implicit FIFO access to transit lines is taken into account by the concept of available capacity. This notion of flow priority has not been considered explicitly in previous models. A traffic equilibrium model is described and a computational procedure based on asymmetric boarding penalty functions is suggested. |

Montanari, Ugo and Pistore, Marco
In this paper we present history-dependent automata (HD-automata in brief). They are an extension of ordinary automata that overcomes their limitations in dealing with history-dependent formalisms. In a history-dependent formalism the actions that a system can perform carry information generated in the past history of the system. The most interesting example is pi-calculus: channel names can be created by some actions and they can then be referenced by successive actions. Other examples are CCS with localities and the history-preserving semantics of Petri nets. Ordinary automata are an unsatisfactory operational model for these formalisms: infinite automata are obtained for all the systems with infinite computations, even for very simple ones; moreover, the ordinary definition of bisimulation does not apply in these cases, thus preventing the reuse of standard theories and algorithms. In this paper we show that HD-automata are an adequate model for the history-dependent formalisms. We present translations of pi-calculus, CCS with localities and Petri nets into HD-automata; and we show that finite HD-automata are obtained for significant classes of systems with infinite computations. We also define HD-bisimulation, both in a set-theoretical way (that is suitable for automatic verification in the case of finite HD-automata) and in a categorical way (by exploiting open maps). HD-bisimulation captures the classical definitions of bisimulation on the considered history-dependent formalisms. |

Cisternino, Antonio and Laganà, Maria Rita
The authors of Logo wanted to create a tool to help children build and explore the world of programming as a means of building and exploring their own logical thought. The strength of this idea is remarkable, and the turtle has carried out its work well. Today, however, new programming paradigms call for new metaphors, and the technological gap between the turtle and modern graphic and multimedia programs seems to weaken the original Logo metaphor. In this paper we propose to enhance standard Logo by introducing modern programming paradigms. In particular, we extend Logo with object-oriented programming and multi-agent programming features. This new language is called YALL, and it tries to change the way of programming in Logo without abandoning the original philosophy of the language. We introduce a new approach to expressing the inheritance relation that is more suitable for children. We discuss these new extensions to Logo, describing the new commands introduced. We also discuss a possible implementation of YALL using the Java language by Sun Microsystems. |

Bini, D. A. and Gemignani, Luca and Meini, B.
By providing a matrix version of Koenig's theorem we reduce the problem of evaluating the coefficients of a monic factor $r(z)$ of degree $h$ of a power series $f(z)$ to that of approximating the first $h$ entries in the first row of the inverse of an infinite Toeplitz matrix in block Hessenberg form. This matrix is reduced to a band matrix if $f(z)$ is a polynomial. We devise a suitable algorithm, based on cyclic reduction and on the concept of displacement rank, for generating a sequence of vectors $\B v^{(2^j)}$ that quadratically converges to the vector $\B v$ having as components the coefficients of the factor $r(z)$. In the case of a polynomial $f(z)$ of degree $N$, the cost of computing the entries of $\B v^{(2^j)}$ given $\B v^{(2^{j-1})}$ is $O(N\log N+\theta(N))$, where $\theta(N)$ is the cost of solving an $N\times N$ Toeplitz-like system. In the case of analytic functions the cost depends on the numerical degree of the power series involved in the computation. The algorithm is strictly related to the Graeffe method for lifting the roots of a polynomial. From the numerical experiments performed with several test polynomials and power series, the algorithm has shown good numerical properties and promises to be a good candidate for implementing polynomial rootfinders based on recursive splitting. |
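The Graeffe connection mentioned at the end can be made concrete with a single root-squaring step (a small Python sketch, unrelated to the paper's Toeplitz machinery): from q(x^2) = (-1)^n p(x) p(-x) one obtains a polynomial whose roots are the squares of those of p, so iterating the step separates the moduli of the roots quadratically.

```python
def graeffe_step(coeffs):
    # One Graeffe root-squaring step. coeffs[k] is the x^k coefficient of p;
    # the returned polynomial q has roots r_i^2, where r_i are the roots of p,
    # via the identity q(x^2) = (-1)^n p(x) p(-x).
    n = len(coeffs) - 1
    # p(-x): flip the sign of the odd-degree coefficients.
    neg = [c if k % 2 == 0 else -c for k, c in enumerate(coeffs)]
    prod = [0] * (2 * n + 1)
    for i, a in enumerate(coeffs):
        for j, b in enumerate(neg):
            prod[i + j] += a * b
    sign = (-1) ** n
    # p(x) p(-x) is even, so only the even-degree coefficients survive.
    return [sign * prod[2 * k] for k in range(n + 1)]

# p(x) = (x - 1)(x - 2) = x^2 - 3x + 2, coefficients low-to-high:
print(graeffe_step([2, -3, 1]))  # → [4, -5, 1], i.e. x^2 - 5x + 4, roots 1 and 4
```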

Bruni, Roberto and Meseguer, J. and Montanari, Ugo
In a similar way as 2-categories can be regarded as a special case of double categories, rewriting logic (in the unconditional case) can be embedded into the more general tile logic, where also side-effects and rewriting synchronization are considered. Since rewriting logic is the semantic basis of several language implementation efforts, it is useful to map tile logic back into rewriting logic in a conservative way, to obtain executable specifications of tile systems. We extend the results of earlier work by two of the authors, focusing on some interesting cases where the mathematical structures representing configurations (i.e., states) and effects (i.e., observable actions) are very similar, in the sense that they have in common some auxiliary structure (e.g., for tupling, projecting, etc.). In particular, we give in full detail the descriptions of two such cases where (net) process-like and usual term structures are employed. Corresponding to these two cases, we introduce two categorical notions, namely, symmetric strict monoidal double category and cartesian double category with consistently chosen products, which seem to offer an adequate semantic setting for process and term tile systems. The new model theory of 2EVH-categories required to relate the categorical models of tile logic and rewriting logic is presented making use of a recently developed framework, called partial membership equational logic, particularly suitable to deal with categorical structures. Consequently, symmetric strict monoidal and cartesian classes of double categories and 2-categories are compared through their embedding in the corresponding versions of 2EVH-categories. As a result of this comparison, we obtain a correct rewriting implementation of tile logic. This implementation uses a meta-layer to control the rewritings, so that only tile proofs are accepted. 
Making use of the reflective capabilities of the Maude language, some (general) internal strategies are then defined to implement the mapping from tile systems into rewriting systems, and some interesting applications related to the implementation of concurrent process calculi are presented. |

Gallo, Giorgio and Scutellà, Maria Grazia
An important class of problems in manufacturing systems is that in which there is an item to be produced, a set of components which can be assembled together in order to obtain the wanted item, and a set of machines which are capable of performing the assembly operations. Here we consider the case in which one wants to find an optimal list of assembly operations. After discussing several optimality criteria, we address the problem in which the assembly operations are to be chosen in such a way that they can be scheduled on the available machines at minimum makespan. We show that these problems can be quite naturally modelled making use of Directed Hypergraphs, and describe a general heuristic, showing that in a particular but still relevant case it yields solutions whose relative error is bounded. We also present a special class of assembly problems which can be solved in polynomial time by means of hyperpath computations. |

Pallotta, Vincenzo and Turini, Franco
This work involves two areas of research in computer science, namely Knowledge Representation and Logic Programming. Starting from the formalism proposed in Features and Fluents by Erik Sandewall for describing and reasoning about Inhabited Dynamical Systems, we reconstruct it within a Logic Programming framework. The result is an extended logic programming language called Fluent Logic Programming (FLP), capable of dealing with the main issues of the frame problem in the context of Reasoning about Action and Change. On the one hand, FLP is a reconstruction of the proof theory of Sandewall's Discrete Fluent Logic by means of Meta-Logic Programming; on the other, it is an attempt to integrate Temporal Reasoning and Logic Programming. In order to show soundness and completeness results relating the Underlying Semantics and the Meta-Logical Semantics on which FLP is based, we provide a new abstract semantics for FLP as a bridge between them. |

Frangioni, Antonio
We propose a class of generalized Bundle methods where the stabilizing term can be any closed convex function satisfying certain properties. We prove finite termination, asymptotic convergence and finite convergence to an optimal point of different variants of the algorithm: for some of these proofs, f-regularity of the function (a new generalization of inf-compactness) is required. Our analysis gives a unified convergence proof for several classes of Bundle methods that have been so far regarded as distinct, enhancing on the results known for some of them: furthermore, it covers methods proposed in the literature that had not previously been recognised as Bundle methods. A novelty in our approach is the proposal of a dual for the minimization problem: we show that Bundle methods can be seen as a dual ascent approach to one single nonlinear problem in the dual space, where nonlinear subproblems are approximately solved at each step with an inner linearization approach. This interpretation is particularly |

Corradini, Andrea and Gadducci, Fabio
We present a categorical formulation of the rewriting of possibly cyclic term graphs, and the proof that this presentation is equivalent to the well-accepted operational definition proposed by Barendregt et al., except for the case of circular redexes, for which we propose (and justify formally) a different treatment. The categorical framework, based on suitable 2-categories, also allows one to model automatic garbage collection and rules for sharing/unsharing and folding/unfolding of structures. Furthermore, it can be used for defining various extensions of term graph rewriting, and for relating it to other rewriting formalisms. |

Brandano, S.
This report formally defines the class of all problems on {\em Reasoning about Actions and Change} in which accurate and complete information about actions, strict inertia in continuous time, continuous change, and alternative results of possibly concurrent and independent actions are the assumed properties. The intended model set for each member of the class is defined in terms of a model-theoretic trajectory semantics. The class is designated, in the {\em Features and Fluents} framework, as the~\KRACi family of reasoning problems. A fix-point characterization of the subclass \KspRAdCi is then given in terms of a simulative algebraic semantics, and we show the difficulties that arise when approaching the full class with that method. A non-simulative algebraic semantics is then presented as an alternative algebraic characterization of the model-theoretic trajectory semantics. This characterization is again made in terms of complete lattices and continuous operators on them. The algebraic semantics is shown to be correct and complete with respect to the trajectory semantics when applied to reasoning problems in \KRACi. Finally, adopting the non-simulative algebraic semantics as a proof procedure, a logic-based {\em Calculus~of~Fluents} is defined via a meta-theoretic extension of Horn Clause Logic using the non-ground representation of object-level variables. The meta-interpreter consists of the Horn clause representation of the proof procedure. The calculus is shown to be decidable, domain independent and class dependent; its semantics is the non-simulative algebraic semantics, and it is executable as a logic program by the SLD-resolution rule. |

Gallo, Giorgio and Scaparra, Maria Paola
One of the greatest difficulties in designing communication networks is the need to interconnect different networks to form a single internetwork. A major problem is the differing maximum packet sizes allowed by each network, which force oversize packets to be broken up into appropriately sized fragments. Unfortunately, the fragmentation process implies additional costs, tightly bound up with the routing and fragmentation strategies in use. The approach commonly used in IP protocols provides two separate functions which operate in a completely independent way. This adversely affects network performance, both quantitatively (throughput) and qualitatively (average packet delay), whereas some form of cooperation between the two functions could reduce this efficiency loss. This paper presents a pseudo-polynomial algorithm for solving the combined fragmentation and routing problem. The solution method is based on the bicriterion shortest path problem, in which two distinct objective functions must be minimized: the path length (in accordance with many classic routing algorithms) and the number of fragments received by the destination host. The paper also deals with the problem of effectively minimizing fragment hops. An algorithm for this purpose is provided, together with an analysis of the possibility of reducing its running time under appropriate assumptions on the generated fragment sizes. |
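The bicriterion shortest path idea underlying this approach can be illustrated with a minimal label-correcting sketch that keeps, at each node, only the Pareto-optimal (length, fragments) labels. This is a hypothetical illustration with an assumed graph encoding, not the paper's pseudo-polynomial algorithm:

```python
import heapq

def bicriterion_paths(adj, src, dst):
    """Enumerate Pareto-optimal (length, fragments) labels from src to dst.

    adj maps each node to a list of (neighbour, length, fragments) arcs,
    where 'fragments' counts the pieces an oversize packet is split into
    on that arc.  A label survives only if no settled label at the same
    node is at least as good in both criteria.
    """
    frontier = {v: [] for v in adj}        # Pareto frontier per node
    heap = [(0, 0, src)]                   # (length, fragments, node)
    while heap:
        length, frags, v = heapq.heappop(heap)
        # Skip labels dominated by an already settled label at v.
        if any(l <= length and f <= frags for l, f in frontier[v]):
            continue
        frontier[v].append((length, frags))
        for w, arc_len, arc_frags in adj[v]:
            heapq.heappush(heap, (length + arc_len, frags + arc_frags, w))
    return sorted(frontier[dst])
```

Because labels are popped in lexicographic order, a later label can never strictly dominate an earlier settled one, so each node's frontier is exactly its Pareto set.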

Giacobazzi, Roberto and Ranzato, Francesco and Scozzari, Francesca
Completeness is important in the design of approximated semantics by abstract interpretation, ensuring that abstract computations are as precise as possible with respect to concrete ones. The authors recently proved that any abstract domain can be systematically refined or simplified in order to obtain its closest complete domain. The key problem that remains is to find efficient and appropriate representations for the objects in complete abstract domains. In this paper we show that these domain transformations, when applied to abstract interpretations of quantales, can all be explicitly characterized by elegant linear logic-based formulations. We apply our results to logic program analysis, where unification can be interpreted as multiplicative conjunction in a quantale of idempotent substitutions. We characterize the objects in optimal semantics for logic program analysis, viz. the most abstract concrete semantics for any static program analysis property, and the structure of condensing domains in logic program analysis. |

Gemignani, Luca
Consider an $n \times n$ lower triangular matrix $L$ whose $(i+1)$-st row is defined by the coefficients of the real polynomial $p_i(x)$ of degree $i$, such that $\{ p_i(x)\}$ is a set of orthogonal polynomials satisfying a standard three-term recurrence relation. If $H$ is an $n\times n$ real Hankel matrix with nonsingular leading principal submatrices, then $\widehat{H}=L HL^T$ will be referred to as a strongly nonsingular Hankel matrix with respect to the orthogonal polynomial basis $\{p_i(x)\}$. In this paper we develop an efficient $O(n^2)$ algorithm for the solution of a system of linear equations with a real symmetric coefficient matrix $\widehat{H}$ which is a Hankel matrix with respect to a suitable orthogonal polynomial basis. We then apply our method to the solution of a weighted finite moment problem. |
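The matrix $L$ above can be generated row by row directly from the three-term recurrence. The sketch below assumes one common normalization of the recurrence, $p_{i+1}(x) = (\alpha_i x + \beta_i)\,p_i(x) - \gamma_i\,p_{i-1}(x)$ with $p_0 = 1$; the coefficient names are illustrative, not taken from the paper:

```python
def recurrence_coeff_matrix(n, alpha, beta, gamma):
    """Build the lower triangular matrix L whose (i+1)-st row holds the
    coefficients of p_i(x), generated by the three-term recurrence
    p_0 = 1,  p_1(x) = alpha[0]*x + beta[0],
    p_{i+1}(x) = (alpha[i]*x + beta[i]) * p_i(x) - gamma[i] * p_{i-1}(x).
    """
    L = [[0.0] * n for _ in range(n)]
    L[0][0] = 1.0
    if n > 1:
        L[1][0], L[1][1] = beta[0], alpha[0]
    for i in range(1, n - 1):
        row = [0.0] * n
        for j in range(i + 1):
            row[j] += beta[i] * L[i][j]          # beta_i * p_i
            row[j + 1] += alpha[i] * L[i][j]     # alpha_i * x * p_i
            row[j] -= gamma[i] * L[i - 1][j]     # - gamma_i * p_{i-1}
        L[i + 1] = row
    return L
```

With the Chebyshev choices alpha = (1, 2, 2, ...), beta = 0, gamma = 1, the rows reproduce the coefficients of $T_0, T_1, T_2, \dots$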

Gemignani, Luca
By representing the $LR$ algorithm of Rutishauser and its variants in a polynomial setting, we derive numerical methods for approximating either all of the roots, or a number $k$ of the roots of minimum modulus, of a given polynomial $p(t)$ of degree $n$. These methods share the convergence properties of the $LR$ matrix iteration but, unlike it, they can be arranged to produce parallel and sequential algorithms which are highly efficient, especially in the case where $k \ll n$. |

Manca, Vincenzo and Martino, D. M.
In this paper we focus on the notion of metabolism and the way it can be expressed in symbolic terms. We show how an adequate logical formalization of this concept can help to unify several approaches based on string rewriting mechanisms. This viewpoint suggests new perspectives in the search for more powerful and general methods for describing symbolic dynamical systems whose evolution obeys internal laws of structural recombination (L-systems, Grammar systems, H-systems, Ecogrammars, DNA computing models). |

Cappanera, Paola and Frangioni, Antonio
We study the coarse-grained parallelization of a Bundle-based cost decomposition algorithm for the solution of linear Multicommodity Min Cost Flow problems, which has already been shown to be effective in a sequential setting. The aim of the study was to investigate whether exploiting the natural parallelism inherent in the cost decomposition method, i.e. the simultaneous solution of the Min Cost Flow subproblems, was enough to obtain acceptable efficiencies without modifying the basic approach: as a result, we obtained a highly portable and flexible code which can be used on different machines. The computational results show that satisfactory efficiencies can be obtained even with many processors on large, difficult MMCF problems with many commodities, which are exactly the class of instances where the sequential code attains its best results. We also show how to exploit a common characteristic of today's supercomputing facilities, i.e. the side-by-side availability of massively parallel machines and powerful vector supercomputers, by implementing an asymmetric decomposition algorithm where each architecture is used for the tasks to which it is best suited. |

Ambriola, Vincenzo and Cignoni, Giovanni A.
According to Italian law, monitoring is a form of quality control that must be performed in the enactment of all contracts of great importance related to the information systems of the Italian Public Administration. In this paper, the authors describe and analyse monitoring as a tool to guarantee the quality of software production and, as a consequence, to ensure customer satisfaction. The paper presents the administrative context in which monitoring is applied, its purpose and application areas, and its relationship with other forms of quality control. In particular, we emphasise the novelty of monitoring and its character as a customer initiative. |

Carraresi, Paolo and Farinaccio, Fernanda and Malucelli, Federico
The issue tackled is testing whether a given solution of a quadratic 0-1 problem is optimal. The paper presents an algorithm based on the necessary and sufficient optimality condition introduced by Hiriart-Urruty for general convex problems. A measure of the quality of the solution is provided. Computational results show the applicability of the method. The method is extended to constrained quadratic 0-1 problems such as quadratic assignment and quadratic knapsack. |

Gemignani, Luca
We present an iterative numerical method for solving two classical stability problems for a polynomial $p(x)$ of degree $n$: the Routh-Hurwitz and the Schur-Cohn problems. This new method relies on the construction of a polynomial sequence $\{ p^{(k)}(x) \}_ {k\in {\bf N}}$, $p^{(0)}(x)=p(x)$ , such that $p^{(k)}(x)$ quadratically converges to $(x-1)^p (x+1)^{n-p}$ whenever the starting polynomial $p(x)$ has $p$ zeros with positive real parts and $n-p$ zeros with negative real parts. By combining some new results on structured matrices with the fast polynomial arithmetic, we prove that the coefficients of $p^{(k)}(x)$ can be computed starting from the coefficients of $p^{(k-1)}(x)$ at the computational cost of $O(n\log^2 n)$ arithmetical operations. Moreover, by means of numerical experiments, we show that the $O(n\log n)$ bit precision of computations suffices to support the stated computational properties. In this way, apart from a logarithmic factor, we arrive at the current best upper bound of $O(n^3\log^4 n)$ for the bit complexity of the mentioned stability problems. |

Ambriola, Vincenzo and Telmon, Claudio
This work describes the design of the protection system for the computing infrastructure of the Dipartimento di Informatica. The design takes into account the peculiarities of this type of research organization: traditionally open organizations, whose main concern is the availability of resources rather than the confidentiality of the data they handle. The main goal of the design is therefore the protection of a few essential resources from which a correct state can be restored on the main systems of the Department. The administrative subnetwork is an exception, since for it the confidentiality and integrity of data must also be guaranteed. The design, completed in April 1996, is at an advanced stage of implementation. |

Viry, Patrick
We introduce a rewriting implementation of the reduction relation of $\pi$-calculus and prove its correctness. The implementation is based on terms with De Bruijn indices and an explicit substitution operator. The resulting rewrite rules need to be applied modulo a large and complex equational theory, and are only of theoretical interest. Applying the coherence techniques introduced in a previous paper, we transform this specification into an equivalent one that only requires rewriting modulo associativity and commutativity. This latter rewrite system can then be straightforwardly encoded in currently available rewriting-based programming languages. Finally, we sketch a possible application of this implementation as the basis for adding input-output capabilities to such languages. |

Degano, Pierpaolo
Proceedings of the day celebrating the silver jubilee of the Corso di Laurea in Scienze dell'Informazione, the Computer Science curriculum. |

Luccio, Fabrizio and Pagli, Linda
We make two contributions to the theory of PRAM computation, focusing on the problem of computing the Boolean OR (to which most basic problems can be reduced). First, we critically discuss the ideal PRAM model generally adopted to prove parallel time bounds, and introduce the natural definition of a simple PRAM. We prove that log_2 n steps are needed to compute the OR of n bits in the CREW variant of this model, and that this bound is tight. This implies that some acclaimed "surprising" results in PRAM theory depend strictly on the properties of the model, and not on the computed function. As a second contribution, we show how to simplify the most advanced known lower bound proof for computing the OR. The new proof scheme does not introduce new results, but aims at enhancing the comprehension of this difficult subject. We also justify a "full information" mode of functioning, generally adopted without discussion. |
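The matching upper bound is the familiar binary-tree reduction, which a short simulation makes concrete; this sketch only illustrates why log_2 n synchronous rounds suffice, not the lower-bound argument of the paper:

```python
def parallel_or_rounds(bits):
    """Simulate a binary-tree OR reduction: in each synchronous round
    every active processor ORs a disjoint pair of cells into one cell
    (no concurrent writes, so each step is legal in a CREW PRAM).
    n bits are reduced in ceil(log2 n) rounds.
    """
    cells = list(bits)
    rounds = 0
    while len(cells) > 1:
        cells = [cells[i] | cells[i + 1] if i + 1 < len(cells) else cells[i]
                 for i in range(0, len(cells), 2)]
        rounds += 1
    return cells[0], rounds
```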

Luccio, Fabrizio and Pagli, Linda
\begin{abstract} We work in $B^n$ and consider sets of $2^m$ points, $m \leq n$, represented in binary {\em balanced} matrices, whose columns contain half 0's and half 1's, and this property repeats recursively in proper submatrices. We introduce the concept of {\em pseudo-cube} of order $m$, that is, a subset of $2^m$ points of $B^n$ whose matrix is balanced. A subcube $B^m \subseteq B^n$ is a special case of pseudo-cube and shares most of its properties. For a given pseudo-cube $P$ we define the class ${\cal P}(P)$ of the pseudo-cubes obtained from $P$ by complementing any subset of variables, and show that the elements of ${\cal P}(P)$ are disjoint and tessellate $B^n$. Furthermore, the union of any two pseudo-cubes of the same class ${\cal P}$ is a pseudo-cube, and the intersection of two arbitrary pseudo-cubes is either empty or is a pseudo-cube. We then introduce {\em pseudo-products} as Boolean functions that have value 1 in the points of a pseudo-cube $P$. These functions inherit all the properties of pseudo-cubes, and have a compact expression EXP($P$). Given two pseudo-products $P_1$, $P_2$ belonging to the same class $\cal P$, we give an algorithm to construct in linear time the expression of the union EXP($P_1 \cup P_2$) from EXP($P_1$) and EXP($P_2$). Finally we show how a standard procedure to generate a minimal "sum of products" form for a Boolean function $f$ can be extended to generate a minimal "sum of pseudo-products" for $f$. This latter form, based on the representation EXP, is generally much shorter than the corresponding Boolean form. \end{abstract} |

Guerrini, S. and Masini, A.
We propose a new formulation for full (weakening and constants included) multiplicative and exponential (MELL) proof nets, allowing a complete set of rewriting rules to parse them. The recognizing grammar defined by such a rewriting system (confluent and strongly normalizing on the new proof nets) yields a correctness criterion that we show to be equivalent to the Danos-Regnier one. |

Gallo, Giorgio and Scutellà, Maria Grazia
We consider the minimum cost hyperflow problem, a generalization of the minimum cost flow problem on standard and on generalized graphs. In order to solve this problem, a specialization of the primal simplex algorithm, named the Hypergraph Simplex Algorithm, has recently been proposed. Here, the results of a wide experimentation on random hypergraphs and on some kinds of structured hypergraphs are reported. In the experimentation, a C implementation of the Hypergraph Simplex Algorithm has been compared with the primal version of the CPLEX code, obtaining rather interesting results. In fact, when the hypergraph size increases, our code runs faster than CPLEX; in particular, its solution time is a rather slowly increasing function of the hypergraph size. The problem under consideration has very interesting applications, which can be modelled and solved in terms of hypergraph flows. These include multicommodity flows and applications in the area of flexible manufacturing systems, such as the One-Dimensional Cutting Stock Problem and Assembly Problems. In particular, it is shown how to model Assembly Problems with a given degree, say k, of parallelism (i.e. when k parallel identical machines are available for the execution of the operations) in terms of hyperflows. Finally, some economic applications are formulated and interpreted in terms of minimum cost hyperflow problems. |

Degano, Pierpaolo and Priami, Corrado
A major drawback of representing concurrent systems as transition systems is the exponential size of their semantic representations. We present a way of obtaining compact transition systems that are mildly exponential on average. The idea is to have (at least) one computation represent all those differing only in the temporal ordering in which concurrent transitions occur. We characterise through axioms the transition systems whose size may be reduced while still preserving non-interleaving bisimulation equivalences. We also exemplify our reduction technique on a process calculus whose compact transition system is generated in SOS style. |

Gadducci, Fabio and Montanari, Ugo
In this paper we introduce a model for a wide class of computational systems, whose behaviour can be described by certain rewriting rules. We drew our inspiration both from the world of term rewriting, in particular from the {\em rewriting logic} framework \cite{Mes92}, and from concurrency theory: among others, the {\em structured operational semantics} \cite{Plo81}, the {\em context systems} \cite{LX90} and the {\em structured transition systems} \cite{CM92} approaches. Our model retains many properties of these sources: first, it provides a compositional way to describe both the states and the sequences of transitions performed by a given system, stressing their distributed nature. Second, a suitable notion of typed proof allows one to take into account also those formalisms relying on the notions of {\em synchronization} and {\em side-effects} to determine the actual behaviour of a system. Finally, an equivalence relation over sequences of transitions is defined, equipping the system under analysis with a concurrent semantics, where each equivalence class denotes a family of ``computationally equivalent'' behaviours, intended to correspond to the execution of the same set of (causally unrelated) events. As a further abstraction step, our model is conveniently represented using double-categories: its operational semantics is recovered with a free construction, by means of a suitable adjunction. |

Corradini, Andrea and Montanari, Ugo and Rossi, Francesca and Ehrig, H. and Heckel, Reiko and Loewe, M.
The algebraic approaches to graph transformation are based on the concept of gluing of graphs, modelled by pushouts in suitable categories of graphs and graph morphisms. This allows one not only to give an explicit algebraic or set-theoretical description of the constructions, but also to use concepts and results from category theory in order to build up a rich theory and to give elegant proofs even in complex situations. In this chapter we start with an overview of the basic notions common to the two algebraic approaches, the "double-pushout (DPO) approach" and the "single-pushout (SPO) approach"; next we present the classical theory and some recent developments of the double-pushout approach. The next chapter is devoted instead to the single-pushout approach, and it closes with a comparison between the two approaches. -- This document will appear as a chapter of "The Handbook of Graph Grammars. Volume I: Foundations", G. Rozenberg (Ed.), World Scientific. |

Priami, Corrado
What follows are the abstracts of the demonstrations at the CONCUR conference held in Pisa in August 1996. The demonstrations cover different aspects of the theory of concurrency, ranging from process algebra verifiers to model checkers, as well as Petri net support tools. There are also demonstrations on checking security properties and on performance analysis based on process algebra descriptions. Finally, a demonstration of game aspects with the CWB is presented. |

Pedreschi, Dino and Ruggieri, Salvatore
Recently, a new approach to the verification of logic and Prolog programs has been proposed, whose main advantage is the possibility of reasoning about different properties in a unified framework. In this paper, we present an equivalent formulation of that proof method which is well suited for modular program verification. The notion of modularity taken into account is based on stratification. We show that the proofs of correctness of a program P and a program Q can be re-used in the proof of correctness of P "union" Q with little additional effort. The main advantage is that the proofs for P and Q are developed in a fully independent way. |

Bagnara, Roberto
Many interesting analyses for constraint logic-based languages are aimed at the detection of \emph{monotonic} properties, namely of properties which are preserved as the computation progresses. Our basic claim is that most, if not all, of these analyses can be described within a unified notion of constraint domains. We present a class of constraint systems which allows for a smooth integration within an appropriate framework for the definition of non-standard semantics of constraint logic-based languages. Such a framework is also presented and motivated. We then show how such domains can be built, and present construction techniques which induce a hierarchy of domains. In particular, we propose a general methodology for domain combination with asynchronous interaction (i.e.~the interaction is not necessarily synchronized with the domains' operations). Following this methodology, interesting combinations of domains can be expressed with all the semantic elegance of concurrent constraint programming languages. |

Pallottino, Stefano and Schettino, Alberta
We describe the implementation of a model for assigning passengers to an interurban transport network with explicit capacity constraints on the vehicles. First, the paths that are active with respect to a threshold $\epsilon$, called minimal, are determined; the demand is then assigned to these paths. The code described computes, in addition to the minimal threshold, the assignment flow. |

Montangero, Carlo and Semini, Laura
A refinement calculus provides a number of advantages to program development besides correctness, such as clarification of the goals of the program and effective documentation of the design choices. In this paper, we provide evidence that the same advantages can also be obtained in the case of those special programs that are known as enactable process models. The evidence is put forward by means of an example, a small Concurrent Engineering problem inspired by the IWSP7 problem. We assume that the enactment is done by rules in tuple spaces, and use a refinement calculus based on a temporal logic that builds on that of Unity. Finally, we show how the approach leads to seamless integration with existing process engines (in our case, the Oikos one). |

Falaschi, M. and Gabbrielli, M. and Marriott, K. and Palamidessi, C.
Concurrent constraint programming (ccp), like most concurrent paradigms, has a mechanism of global choice which makes computations dependent on the scheduling of processes. This is one of the main reasons why the formal semantics of ccp is more complicated than that of its deterministic and local-choice sublanguages. In this paper we study various subsets of ccp obtained by adding some restriction on the notion of choice, or by requiring confluence, i.e. independence from the scheduling strategy. We show that it is possible to define simple denotational semantics for these subsets, for various notions of observables. Finally, as an application of our results, we develop a framework for the compositional analysis of full ccp. The basic idea is to approximate an arbitrary ccp program by a program in the restricted language, and then analyze the latter by applying the standard techniques of abstract interpretation to its denotational semantics. |

Bagnara, Roberto
The subject of groundness analysis for (constraint) logic programs has been widely studied, and interesting domains have been proposed. \textit{Pos} has been recognized as the most suitable domain for capturing the kind of dependencies arising in groundness analysis. Its (by now standard) implementation is based on \emph{reduced ordered binary-decision diagrams} (ROBDDs), a well-known symbolic representation for Boolean functions. Even though several authors have reported positive experiences using ROBDDs for groundness analysis, in the literature there is no reference to the problem of efficiently detecting those variables which are deemed to be ground in the context of a ROBDD. This is not surprising, since most currently implemented analyzers need to derive this information only \emph{at the end} of the analysis and only for presentation purposes. Things are quite different when this information is required \emph{during} the analysis. This need arises when dealing with languages which employ some sort of \emph{delay mechanism}, typically based on groundness conditions. In these cases, the \emph{na\"\i{}f} approaches are too inefficient, since the abstract interpreter must quickly (and often) decide whether a constraint is delayed or not. Fast access to ground variables is also necessary when aliasing analysis is performed using a domain that does not keep track of ground dependencies. In this paper we introduce and study the problem, proposing two possible solutions. The second one, besides making possible the quick detection of ground variables, also has the effect of keeping the ROBDDs as small as possible, improving the efficiency of groundness analysis itself. |

Brogi, Antonio and Contiero, Simone
Meta-level compositions of object logic programs are naturally implemented by means of meta-programming techniques. Meta-interpreters defining program compositions, however, suffer from a computational overhead that is due partly to the interpretation layer present in all meta-programs, and partly to the specific interpretation layer needed to deal with program compositions. We show that meta-interpreters implementing compositions of object programs can be fruitfully specialised w.r.t. meta-level queries of the form "Demo(E,G)", where "E" denotes a program expression and "G" denotes a (partially instantiated) object-level query. More precisely, we describe the design and implementation of a declarative program specialiser that suitably transforms such meta-interpreters so as to sensibly reduce --- if not completely remove --- the overhead due to the handling of program compositions. In many cases the specialiser also succeeds in eliminating the overhead due to meta-interpretation. |

Pedreschi, Dino and Ruggieri, Salvatore
A novel approach to the verification of meta-interpreters is introduced. We apply a general purpose verification method for logic programs, proposed by the authors, to the case study of the Vanilla and other logic meta-interpreters. We extend the standard notion of declarative correctness, and design a criterion for proving correctness of meta-interpreters in a general sense, including amalgamated and reflective meta-interpreters. The contribution of this paper can be summarized as follows: under certain natural assumptions, all interesting verification properties lift up from the object program to the meta-program, including partial correctness, termination, absence of errors, call patterns persistence, correct instances of queries, computed instances of queries. Interestingly, it is possible to establish these results on the basis of purely declarative reasoning, using the mentioned proof method. We believe that the obtained results illustrate the broad applicability of the adopted verification principles. |

Frangioni, Antonio and Gallo, Giorgio
We present a cost decomposition approach to linear Multicommodity Min Cost Flow problems, based on dualizing the mutual capacity constraints: the resulting Lagrangean dual is solved by means of a new, specialized Bundle-type dual ascent algorithm. Although decomposition approaches are generally believed not to be competitive, even for the solution of large-scale network-structured problems, we present evidence, based on extensive computational comparisons, that a careful implementation of a decomposition algorithm can outperform several other approaches, especially on problems where the number of commodities is "large" w.r.t. the size of the graph. Our specialized Bundle algorithm is characterised by a new heuristic for handling the trust region parameter, and embeds a custom, fast Quadratic Programming solver that permits the implementation of a Lagrangean variable generation strategy. The Lagrangean relaxation solver is capable of exploiting the structural properties of the single-commodity Min Cost Flow subproblems to avoid using a "generic" MCF solver whenever possible. The proposed approach can easily be extended to handle extra constraints or variants such as the Nonsimultaneous Multicommodity problem. In our computational experience, we also studied the impact of some characteristics, such as the number of commodities w.r.t. the total size of the problem, on the relative efficiency of the different approaches tested. |

Luccio, Fabrizio and Pagli, Linda
We introduce spatial graphs through a natural extension of the concept of planarity to three dimensions. For a graph G=(V,E), a face is a simple cycle of edges, and a complete set of faces is such that each edge belonging to a cycle in E is on at least one face in the set. G is spatial if it admits a three-dimensional representation R with a complete set of faces consisting of simple surfaces, such that no two faces intersect except along a common edge. In particular, G is called k-spatial if R includes k cells (spatial regions), and plane-face spatial if all the faces in R are plane. We prove that all graphs are 1-spatial, but not all of them are (k>1)-spatial or plane-face spatial. In particular, $K_n$ is $(n^2-3n)/2$-spatial and plane-face spatial. We derive some basic properties of spatial graphs as natural three-dimensional extensions of the properties of planar graphs. |

Carraresi, Paolo and Frangioni, Antonio and Nonato, Maddalena
Among practical methods for large-scale Nondifferentiable Optimization (NDO), Bundle methods are widely recognized to play a relevant role; despite their potential, however, they are not often utilized for the maximization of polyhedral functions, which appears in many different contexts such as Lagrangean duals and decomposition algorithms, since simpler-to-program but less efficient approaches like subgradient methods are preferred. The aim of this work is to provide an applications-oriented survey of the theory of Bundle methods when applied to problems arising in continuous and combinatorial optimization, with an introduction to the several variants of Bundle approaches that can be built up from a limited set of basic concepts and tools. |

Petrini, Fabrizio and Vanneschi, Marco
The past few years have seen a rise in popularity of massively parallel architectures that use fat-trees as their interconnection network. In this paper we formalize a parametric family of fat-trees, the k-ary n-trees, built with constant-arity switches interconnected in a regular topology. A simple adaptive routing algorithm for k-ary n-trees sends each message towards one of the nearest common ancestors (NCA) of both source and destination, choosing the least loaded physical channels, and then reaches the destination following the unique available downward path. Through simulation on a 4-ary 4-tree with 256 nodes, we analyze some variants of the adaptive algorithm that utilize wormhole routing with 1, 2 and 4 virtual channels. The experimental results show that the uniform, bit-reversal and transpose traffic patterns are very sensitive to the flow control strategy. In all these cases, the saturation points are between 35-40% of the network capacity with 1 virtual channel, 55-60% with 2 virtual channels and around 75% with 4 virtual channels, and after saturation the network throughput remains stable. Increasing the number of virtual channels slightly worsens the average network latency, but improves the bounds on its distribution. The complement traffic, a representative of the class of congestion-free communication patterns, reaches an optimal performance, with a saturation point at 97% of the capacity for all flow control strategies. In this case virtual channels are of little help, and the average network latency experiences an increase proportional to the number of virtual channels, due to the multiplexing of several packets onto the same physical path. |
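The upward phase of the NCA routing described above is determined by how far the base-k labels of source and destination agree. A minimal sketch, under the assumed convention that processing nodes carry n-digit base-k labels (most significant digit first) and that two nodes differing only in the last digit meet at level 1:

```python
def nca_level(src, dst, k, n):
    """Level of the nearest common ancestors of two processing nodes in a
    k-ary n-tree.  A message climbs to this level before descending along
    the unique downward path; identical labels give level 0 (no climb).
    """
    src_digits = [(src // k**(n - 1 - i)) % k for i in range(n)]
    dst_digits = [(dst // k**(n - 1 - i)) % k for i in range(n)]
    prefix = 0                       # length of common label prefix
    while prefix < n and src_digits[prefix] == dst_digits[prefix]:
        prefix += 1
    return n - prefix
```

For the 4-ary 4-tree of the experiments, nodes 0 and 255 share no label digits, so their messages must climb all the way to level 4.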

Pallottino, Stefano and Scutellà, Maria Grazia
We consider dual approaches for the Shortest Path Tree Problem. After a brief introduction to the problem, we review the most important dual algorithms which have been described in the literature for its solution, and propose a new family of dual ascent algorithms. In these algorithms, "local" and "global" dual updating operations are performed at the nodes in order to enlarge a partial shortest path tree, which at the beginning contains only the root node, until a shortest path tree is found. Several kinds of dual updating operations are proposed, which allow one to derive different dual algorithms from a general schema. One of them, in particular, which is based only on global operations, can be viewed as a dual interpretation of Dijkstra's classical algorithm. Due to their structure, all the proposed approaches are suitable for parallel implementations. They are also suitable for reoptimization approaches, when the computation of shortest paths from different root nodes is required. |
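The dual reading of Dijkstra's algorithm mentioned above rests on the classical primal procedure, in which the tentative node labels can be interpreted as dual variables raised "globally" until optimality. A minimal sketch of the classical algorithm (not the new dual ascent family of the abstract):

```python
import heapq

def dijkstra(adj, root):
    """Dijkstra's classical shortest path tree algorithm.
    adj: {node: [(head, cost), ...]} with nonnegative costs.
    The labels d[v] can be read as dual variables: at termination
    every arc has nonnegative reduced cost, the dual optimality
    condition underlying the dual interpretation above."""
    d = {root: 0}
    pred = {root: None}
    heap = [(0, root)]
    done = set()
    while heap:
        dv, v = heapq.heappop(heap)
        if v in done:
            continue
        done.add(v)                      # v joins the partial tree
        for w, c in adj.get(v, []):
            if w not in done and dv + c < d.get(w, float("inf")):
                d[w] = dv + c            # label-setting update
                pred[w] = v
                heapq.heappush(heap, (d[w], w))
    return d, pred

adj = {0: [(1, 2), (2, 5)], 1: [(2, 1)], 2: []}
d, pred = dijkstra(adj, 0)
assert d == {0: 0, 1: 2, 2: 3}
```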

Pasetto, Davide and Vanneschi, Marco
Structured parallel programming is one of the possible solutions for achieving Programmability, Portability and Performance in the parallel programming world. Programming with high level parallel constructs permits the programmer to focus on the development of the parallel algorithms rather than on their low level implementation. The power of this approach lies in the possibility of modeling the performance of the high level constructs and building optimizing template-based compilers. These compilers use formulas describing the performance of language constructs over the target architectures and tune their parametric implementation to achieve high performance figures. In this paper, we describe a set of applications developed using such a programming methodology and show the results obtained. |

Pasetto, Davide and Vanneschi, Marco
Structured parallel programming is one of the possible solutions for achieving Programmability, Portability and Performance in the parallel programming world. The power of this approach lies in the possibility of building an optimizing template-based compiler using low time complexity algorithms. In order to optimize the code, this compiler needs formulas that describe the performance of language constructs over the target architecture. We propose a set of parameters able to describe current parallel systems, and build deterministic analytical models for the basic forms of parallelism. The analytical model describes construct performance in a parametric way. This can be done by knowing that the compiler exploits a template-based support, and by giving template implementors guidelines to follow so that actual implementations perform as predicted. |

Viry, Patrick
We present a transformation from conditional rewrite systems into unconditional ones and prove its correctness. The transformed systems are quite intuitive and well suited for parallel execution. We also show how termination and confluence of the original system are preserved in the transformed one. |

Viry, Patrick
We introduce rewriting with two sets of rules, the first interpreted equationally and the second not. A semantic view considers equational rules as defining an equational theory and reduction rules as defining a rewrite relation modulo this theory. An operational view treats both sets of rules alike. We introduce sufficient properties for these two views to be equivalent (up to different notions of equivalence). The paper ends with a collection of examples showing the effectiveness of this approach. |

Frangioni, Antonio
Bundle methods for Nondifferentiable Optimization are widely recognised as one of the best choices for the solution of Lagrangean Duals; one of their major drawbacks is that they require the solution of a Semidefinite Quadratic Programming subproblem at every iteration. We present an active-set method for the solution of such problems, which improves upon the ones in the literature by distinguishing among bases with different properties and exploiting their structure in order to reduce the computational cost of the basic step. Furthermore, we show how the algorithm can be adapted to the several needs that arise in practice within Bundle algorithms; we describe how it is possible to allow constraints on the primal direction, how special (box) constraints can be dealt with more efficiently, and how to accommodate changes in the number of variables of the nondifferentiable function. Finally, we describe the important implementation issues, and we report some computational experience to show that the algorithm is competitive with other QP codes when used within a Bundle code for the solution of Lagrangean Duals of large-scale (Integer) Linear Programs. |

Petrini, Fabrizio and Vanneschi, Marco
Many theoretical models of parallel computation are based on overly simplistic assumptions on the performance of the interconnection network. For example they assume constant latency for any communication pattern or infinite bandwidth. This paper presents a case study based on the FFT transpose algorithm, which is mapped on two families of scalable interconnection networks, the k-ary n-trees and the k-ary n-cubes. We analyze in depth the network behavior of a minimal adaptive algorithm for the k-ary n-trees and three algorithms for the k-ary n-cubes, each offering an increasing degree of adaptivity: the deterministic routing, a minimal adaptive routing based on Duato's methodology and the Chaos routing, a non-minimal adaptive cut-through version of the hot potato routing. The simulation results collected on topologies with up to 256 processing nodes show that the k-ary n-trees can efficiently support the basic FFT algorithm by factoring the personalized broadcast in a sequence of congestion-free steps. Though k-ary n-cubes are less favored in terms of bisection bandwidth, we can narrow the performance gap between the two interconnection networks by properly interleaving communication and computation. While in the presence of bandwidth-bound patterns the communication latency becomes difficult to predict, the global accepted network bandwidth converges to a fixed value after a stabilization period, though both adaptive algorithms on the cubes suffer from post-saturation problems. |

Cambini, Riccardo and Gallo, Giorgio and Scutellà, Maria Grazia
We consider the capacitated minimum cost flow problem on directed hypergraphs. We define spanning hypertrees, thus generalizing the spanning trees of standard graphs, and show that, as in the standard and in the generalized minimum cost flow problems, a correspondence exists between bases and spanning hypertrees. Then, we show that, as for the network simplex algorithms for the standard and for the generalized minimum cost flow problems, most of the computations performed at each pivot operation have direct hypergraph interpretations. |

Carboni, Marilisa and Giannotti, Fosca and Manco, Giuseppe and Pedreschi, Dino
We show how a fragment of the logical data language LDL++ can be used to define an abstract machine supporting the essence of active/deductive object-oriented databases. We exhibit a compilation into the mentioned logic language of the basic features of the object model, including the schema definition language, the query language with multiple roles, the basic update operations, and a form of active rules. The target logic language, which is essentially Datalog extended with non-determinism and a form of stratified negation, has been chosen with a twofold aim. On one side, it should provide an abstract implementation level, where the object model is declaratively reconstructed and its semantics clarified. On the other side, the proposed compilation should form the basis of a realistic implementation, as LDL++ can be efficiently executed, and supports real side effects. |

Bacci, Bruno and Cantalupo, Barbara and Guerrini, Nicola and Pelagatti, Susanna
In this document, we discuss a prototype compiler for P3L, a structured parallel programming language, targeting a network of heterogeneous workstations. P3L is implemented on top of the abstract parallel machine provided by PVM. In particular, we discuss the definition of a new set of implementation templates, designed to meet the features of the target, and we show how the correct configurations for the abstract machine can be automatically generated within the support. (The document is written in Italian.) |

Sperduti, Alessandro and Starita, Antonina
We show how the Labeling RAAM (LRAAM) can be exploited to generate `on the fly' neural networks for the associative access of labeled structures. The topology of these networks, which we call Generalized Hopfield Networks (GHN), depends on the topology of the query used to retrieve information, and the weights on the networks' connections are the weights of the LRAAM encoding the structures. A method for incrementally discovering multiple solutions to a given query is presented. This method is based on terminal repellers, which are used to `delete' known solutions from the set of admissible solutions to a query. Terminal repellers are also used to implement exceptions at query level, i.e., when a solution to a query must satisfy some negative constraints on the labels and/or substructures. Moreover, the proposed model very naturally solves the connectionist variable binding problem at query level. Some results for a tree-like query are presented. Finally, we define a parallel mode of execution for the GHN, exploiting terminal repellers, and we propose to use terminal attractors for implementing shared variables and graph queries. |

Paiano, Pietro and Pallottino, Stefano
In a recent work [13], a hierarchical logit model was proposed for assigning flows to an urban public transport network. Based on some behavioural hypotheses about the passengers, the model assumes that at each stop and for each destination at most one boarding hyperarc is considered active. In this work, after an introduction to Dial's logit model [6] aimed at showing how, for private transport, a global logit model coincides with a hierarchical one, we analyse the model proposed in [13] and formulate alternative hypotheses on passenger behaviour which imply the possible presence of several active hyperarcs at the same stop for the same destination. We then propose a solution technique for the model, more general than the one described in [13], and show the effects of the different hypotheses on some test cases. |

Ruggieri, Salvatore
We introduce the notion of $\exists$-universal termination of logic programs. A program P and a goal G $\exists$-universally terminate iff there exists a selection rule S such that every SLD-derivation of $P \cup \{ G \}$ via S is finite. We claim that this is an essential concept for declarative programming, where a crucial point is to associate a terminating control strategy to programs and goals. We show that $\exists$-universal termination and universal termination via fair selection rules coincide. Then we offer a characterization of $\exists$-universal termination by defining fair-bounded programs and goals. They provide us with a correct and complete method for proving $\exists$-universal termination. We show other valuable properties of fair-bounded programs and goals, including persistency, modularity, ease of use in paper-and-pencil proofs, and amenability to automation. |

Giacobazzi, Roberto and Ranzato, Francesco
Completeness in abstract interpretation is an ideal situation where the abstract semantics is able to take full advantage of the power of representation of the underlying abstract domain. Thus, complete abstract interpretations can be rightfully considered as optimal. In this article, we develop a general theory of completeness in abstract interpretation, also dealing with the most frequent case of least fixpoint semantics. We show that both completeness and least fixpoint completeness are properties that only depend on the underlying abstract domain. In this context, we demonstrate that there always exist both the greatest complete and least fixpoint complete restrictions of any abstract domain, and for continuous semantic functions, the least complete extension exists as well. Also, under certain hypotheses, a constructive characterization of the least complete extensions and restrictions of abstract domains is given as solutions of simple abstract domain equations. These methodologies provide advanced algebraic tools for manipulating and comparing abstract interpretations, which can be fruitfully used both in program analysis and in semantics design. A number of examples illustrating these techniques are given in the fields of integer variable and logic program analysis. |

Luccio, Fabrizio and Mahafzah, M. and Omari, M. and Pagli, Linda
We introduce the new Masked Interval Routing Scheme, MIRS for short, where a mask is added to each interval to indicate particular subsets of "consecutive" labels. Interval routing becomes more flexible, with the classical IRS scheme being a special case of MIRS. We then take two directions. First, we show that the interval information stored in the network may be drastically reduced in the hard cases, proving that in globe graphs of O(n^2) vertices the number of intervals per edge goes down from Omega(n) to O(log n). The technique is then extended to globe graphs of arbitrary dimensions. Second, we show that MIRS may be advantageously used in fault-tolerant networks, proving that optimal routing with one interval per edge is still possible in hypercubes with a "harmless" subset of faulty vertices. This work is aimed at introducing a new technique. Further research is needed in both of the directions taken here. Still, the examples provided show that MIRS may be useful in practical applications. |

van Breugel, Franck
For a simple CCS-like language a trace semantics and a failure semantics are presented. The failure semantics is shown to be fully abstract with respect to the trace semantics if and only if the set of internal actions is infinite. |

van Breugel, Franck
Well-known metric spaces for modelling finitely branching and image finite systems are shown to be (the carrier of) terminal coalgebras. |

Ambriola, Vincenzo and Cignoni, Giovanni A.
According to Italian law, monitoring is a kind of quality control that must be performed during the carrying out of contracts regarding the information systems of Italian Public Bodies. This technical report describes and analyses monitoring as a tool to guarantee the quality of software production and consequently to assure the customer's satisfaction. It presents the institutional context in which monitoring is used, its activities, its purposes, its areas of application, and its relation to other kinds of quality control. |

Pedreschi, Dino and Ruggieri, Salvatore
This paper investigates the advantages of reasoning on logic programs and queries that have only successful derivations. We consider an extension of the logic programming paradigm that combines guarded clauses, in the style of concurrent logic languages, and dynamic selection rules. Some general conditions for a class of programs and queries are stated, which characterize when only successful derivations are present. A few practical instances of the general conditions are studied, and their applicability to real programs demonstrated. The main contributions of the proposed method are: (i) don't-care nondeterminism can be safely adopted for programs that do not fail, (ii) termination of process networks expressed as logic programs can be proved by means of a simple proof method developed for a fixed selection rule, and (iii) a strategy for parallelizing terminating Prolog programs is identified. |

Corradini, Andrea and Drewes, Frank
"Acyclic" Term Graphs are able to represent terms with sharing, and the relationship between Term Graph Rewriting (TGR) and Term Rewrtiting (TR) is now well understood. During the last years, some researchers considered the extension of TGR to possibly "cyclic" term graphs, which can represent possibly infinite, rational terms. In [Kennaway et.al.,1994] the authors formalize the classical relationship between TGR and TR as an "adequate mapping" between rewriting systems, and extend it by proving that unraveling is an adequate mapping from cyclic TGR to rational, infinitary term rewriting: In fact, a single graph reduction may correspond to an infinite sequence of term reductions. Using the same notions, we propose a different adequacy result, showing that unraveling is an adequate mapping from cyclic TGR to "rational parallel term rewriting" , where at each reduction infinitely many rules can be applied in parallel. We also argue that our adequacy result is more natural than that proposed in [Kennaway et.al.,1994], because the length of the reduction sequences is preserved by unraveling, and collapsing rules are treated in a completely uniform way. |

Bini, D. A. and Gemignani, Luca
An algorithm for the computation of the LU factorization over the integers of an $n\times n$ Bezoutian $B$ is presented. The algorithm requires $O(n^2)$ arithmetic operations and involves integers having at most $O(n\log nc)$ bits, where $c$ is an upper bound on the moduli of the integer entries of $B$. As an application, by using the correlations between Bezoutians and the Euclidean scheme, we devise a new division-free algorithm for the computation of the polynomial pseudo-remainder sequence of two polynomials of degree at most $n$ in $O(n^2)$ arithmetic operations. The growth of the length of the integers involved in the computation is kept at the minimum value, i.e., $O(n\log nc)$ bits, whether or not the sequence is normal, where $c$ is an upper bound on the moduli of the input coefficients. The algorithms, which rely on the Bareiss technique and on the properties of the Schur complements of Bezoutians, improve on the previous ones. |
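The Bareiss technique underlying this abstract can be sketched generically: it is a fraction-free Gaussian elimination over the integers in which every division is exact, so no fractions ever appear and coefficient growth stays controlled. The sketch below is the textbook dense version, not the specialised $O(n^2)$ Bezoutian algorithm of the report, and omits pivoting for brevity.

```python
def bareiss(a):
    """Bareiss fraction-free Gaussian elimination on an integer
    matrix (works on a copy). Every // below is an exact integer
    division, so all intermediate values remain integers; the last
    pivot a[n-1][n-1] equals the determinant."""
    a = [row[:] for row in a]
    n = len(a)
    prev = 1                      # previous pivot, d_{k-1}
    for k in range(n - 1):
        assert a[k][k] != 0, "pivoting omitted in this sketch"
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # two-by-two minor divided by the previous pivot: exact
                a[i][j] = (a[i][j] * a[k][k] - a[i][k] * a[k][j]) // prev
            a[i][k] = 0
        prev = a[k][k]
    return a

m = bareiss([[2, 1, 1], [1, 3, 2], [1, 0, 0]])
assert m[2][2] == -1   # determinant of the input matrix
```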

van Breugel, Franck
A labelled transition system is presented for Milner's pi_epsilon-calculus. This system is related to the reduction system for the calculus presented by Bellin and Scott. Also a reduction system and a labelled transition system for pi_epsilonI-calculus are given and their correspondence is studied. This calculus is a subcalculus of pi_epsilon-calculus in the way Sangiorgi's piI-calculus is a subcalculus of ordinary pi-calculus. |

Gervasi, Vincenzo and Raffaetà, Alessandra
The marriage between logic programming and databases has given rise to the definition of deductive databases. These systems allow users to express data manipulations and queries in a declarative way, and permit efficient storage and retrieval of intensional knowledge. Another improvement in the database field has come from the use of active rules, linking the occurrence of certain events to a reaction (e.g., updates of some data). This kind of rule has proven to be very useful for ensuring integrity constraints and automating common procedures. This paper presents an integration of active rules in U-Datalog, an extension of Datalog supporting the declarative specification of updates based on a nonimmediate update semantics. The resulting language, called Active-U-Datalog, extends the semantics of U-Datalog in a conservative way, introducing a PARK-like semantics for active rule activation and firing, and for handling conflicting update requests. |

Giacobazzi, Roberto
We introduce a practical method for the abductive analysis of modular logic programs. This is obtained by reversing the deduction process, which is usually applied in static dataflow analysis of logic programs, on generic, possibly abstract, domains for analysis. The approach is validated in the framework of abstract interpretation. The abduced information provides an abstract specification for program modules which can be of assistance both in the top-down development of programs and in compile-time optimization. To the best of our knowledge this is the first application of abductive reasoning in the dataflow analysis of logic programs. |

Giacobazzi, Roberto and Ranzato, Francesco
We define the notion of meet-uniformity for closure operators on a complete lattice, which corresponds to co-additivity restricted to subsets mapped into the same element, and we study its properties. A class of closures given by principal filters and the downward closures are relevant examples of meet-uniform closures. Next, we introduce a lifting of a complete order by means of a meet-uniform closure. Our main results show that this lifting preserves the complete lattice structure, and allows the meet-uniform closure to become fully co-additive. |

Giacobazzi, Roberto and Ranzato, Francesco
In the context of standard abstract interpretation theory, we define and study a systematic operator of reduced relative power for composing functionally abstract domains. The reduced relative power of two abstract domains D1 (the exponent) and D2 (the base) consists in a suitably defined lattice of monotone functions from D1 to D2, called dependencies, and it is a generalization of the Cousot and Cousot operator of reduced (cardinal) power. The relationship between the reduced relative power and Nielson's tensor product is also investigated. The case of autodependencies (when base and exponent are the same domain) turns out to be particularly interesting: Under certain hypotheses, the domain of autodependencies corresponds to a suitable powerset-like completion of the base abstract domain, providing a compact lattice-theoretic representation for autodependencies. The usefulness of the reduced relative power is shown by a number of applications to abstract domains used both for program analysis and for semantics design. Notably, we prove that the domain Def for logic program ground-dependency analysis can be characterized as autodependencies of a standard more abstract domain representing plain groundness only, and we show how to exploit reduced relative power to systematically derive compositional logic program semantics. |

Pallottino, Stefano and Scutellà, Maria Grazia
Shortest Path Problems are among the most studied network flow optimization problems, with interesting applications in various fields. One such field is transportation, where shortest path problems of different kinds need to be solved. Due to the nature of the application, transportation scientists need very flexible and efficient shortest path procedures, both in terms of running time and of memory requirements. Since no "best" algorithm currently exists for every kind of transportation problem, research in this field has recently moved to the design and implementation of "ad hoc" shortest path procedures, which are able to capture the peculiarities of the problems under consideration. The aim of this work is to present in a unifying framework both the main algorithmic approaches that have been proposed in the past years for solving the shortest path problems arising most frequently in the transportation field, and some important implementation techniques which allow one to derive efficient procedures from the general algorithmic schema, in line with trends in current research. In the first part of the paper, after presenting the problem, we review those classical primal and dual algorithms which seem to be the most interesting for transportation. Promising reoptimization approaches are then discussed. The second part is devoted to dynamic shortest path problems, which arise very frequently in the transportation field. We analyse their main features and propose a general "chronological" algorithmic paradigm, called Chrono-SPT. We study several special cases, and investigate promising research avenues related to various extensions (time-windows, turn penalties, multicriteria and shortest hyperpaths). |

Pisanti, Nadia
The aim of this report is to review DNA computing: molecular biology is used to suggest a new way of solving an NP-complete problem. The idea (due to Leonard Adleman in [Adl94]) is to use strands of DNA to encode the (instance of the) problem, and to manipulate them using techniques commonly available in any molecular biology laboratory, in order to simulate operations that select the solution of the problem, if it exists. After Adleman's paper (which appeared in "Science" in November 1994), many authors have become interested in DNA computing. We try to give a description of the discussions and the results that have appeared in the literature. |

Gemignani, Luca
Frequently, in control system design, we are asked to locate the roots of a bivariate polynomial of the following form \begin{eqnarray*} H(s,k)=\sum_{i=0}^n Q_i(s) k^i\in {\bf Z}[k,s]={\bf Z}[k][s]={\bf D}[s], \end{eqnarray*} where $Q_i(s)\in {\bf Z}[s]$ for each $i$ and, moreover, $k$ is a free parameter ranging in some real interval. For a fixed value $\bar k$ of $k$, the zero distribution of the univariate polynomial $p(s)=H(s,\bar k)$ with respect to the imaginary axis can be found by determining the inertia of a Bezout matrix $B=B(\bar k)$ whose entries are expressed in terms of the coefficients of $p(s)$. This evaluation is usually accomplished by computing a block factorization of $B$, namely, $U^TBU=D$, where $D$ is a block diagonal matrix with lower triangular blocks with respect to the antidiagonal. In this paper we propose an efficient hybrid approach for determining the zero distribution of $H(s,k)$ with respect to the imaginary axis for any value of $k$. We develop a fast fraction-free method for factoring the Bezout matrix $B(k)$ with entries over ${\bf D}$ determined by $H(s,k)\in {\bf D}[s]$. In this way, we easily compute the sequence $\{\phi_i(k)\}$ of the trailing principal minors of $B(k)$. For almost any value $\bar k$ of $k$, the associated sign sequence $\{sign(\phi_i(\bar k))\}$ specifies the inertia of $B(\bar k)$ and, therefore, the zero distribution of $H(s,\bar k)$. The function $sign(\phi_i(k))$ is finally obtained by numerically computing rational approximations of the real zeros of $\phi_i(k)\in {\bf Z}[k]$. |

Manca, Vincenzo
In this paper first order logic is used for expressing the syntax of formal languages. A model Syn is introduced, and it is shown that within it syntactic relations and classes of languages are representable by means of first order formulae. The class of Sigma*(1) formulae is defined as an extension of the arithmetical Sigma(1) formulae; then, a theory is considered in which many syntactical notions can be developed in terms of axiomatic representability. This logical approach seems adequate to analyze and to integrate in a unique framework several grammatical models that emerge in the search for new computational paradigms. |

Manconi, Carlo and Cantalupo, Barbara
This document concerns the implementation of the P3L parallel programming language on the MPI Abstract Machine. First we focus our attention on the characteristics of the MPI Abstract Machine; then we present a preliminary implementation of each P3L construct based on that architecture. |

Pedreschi, Dino and Ruggieri, Salvatore
We propose a proof method in the style of Hoare's logic, aimed at providing a unifying framework for the verification of logic and Prolog programs with respect to their specifications. The method, which relies on purely declarative reasoning, has been designed as a trade-off between expressive power and ease of use. On the basis of a few simple principles, we reason uniformly on several properties of logic and Prolog programs, including partial correctness, total correctness, absence of run-time errors, safe omission of the occur-check, computed answers, modular program development. We finally generalize the method to general programs. |

Brogi, Antonio and Contiero, Simone and Turini, Franco
The program composition approach can be fruitfully applied to combine general logic programs, i.e. logic programs possibly containing negative premises. We show how the introduction of a basic set of (meta-level) composition operations over general programs increases the knowledge representation capabilities of logic programming for non-monotonic reasoning. Examples of modular programming, hierarchical reasoning, constraints and rules with exceptions will be illustrated. The semantics of programs and program compositions is defined in terms of three-valued logic. The computational interpretation of program compositions is formalised by an equivalence preserving syntactic transformation of arbitrary program compositions into standard general programs. |

Gemignani, Luca
A new algorithm is presented for computing an integer polynomial similar to the GCD of two polynomials $u(x)$ and $v(x)$ $\in {\bf Z}[x]$, $\deg(u(x))=n\geq \deg(v(x)) $. Our approach uses structured matrix computations involving Bezout matrices rather than Hankel matrices. In this way we reduce the computational costs showing that the new algorithm requires $O(n^2)$ arithmetical operations or $O(n^4(\log^2 n +l^2))$ Boolean operations, where $l=\max \{ \log(\parallel u(x) \parallel_{\infty}), \log(\parallel v(x) \parallel_{\infty})\}$. |
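The pseudo-remainder computations this line of work accelerates can be illustrated with the textbook primitive pseudo-remainder sequence, which computes a polynomial GCD over the integers without ever introducing fractions. This is the classical baseline, not the Bezout-matrix algorithm of the abstract; polynomials are coefficient lists, highest degree first.

```python
from math import gcd
from functools import reduce

def prem(u, v):
    """Pseudo-remainder of integer polynomials: u is rescaled by
    the leading coefficient of v before each subtraction, so the
    quotient digit below is an exact integer division."""
    u = u[:]
    dv = len(v) - 1
    while len(u) - 1 >= dv:
        if u[0] == 0:
            u = u[1:]
            continue
        u = [v[0] * c for c in u]       # rescale: division becomes exact
        q = u[0] // v[0]
        for i in range(len(v)):
            u[i] -= q * v[i]
        u = u[1:]                       # leading coefficient is now zero
    return u

def poly_gcd(u, v):
    """GCD over Z via the primitive pseudo-remainder sequence:
    each remainder is divided by its integer content to tame the
    coefficient growth the abstract is concerned with."""
    while any(v):
        r = prem(u, v)
        while r and r[0] == 0:
            r = r[1:]
        if r:
            c = reduce(gcd, (abs(x) for x in r))
            r = [x // c for x in r]
        u, v = v, r
    return u

# gcd of (x-1)(x-2) and (x-1)(x-3) is x-1
assert poly_gcd([1, -3, 2], [1, -4, 3]) == [1, -1]
```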

Gemignani, Luca
We present new algorithms using structured matrix methods for manipulating polynomials expressed in generalized form, that is, relative to a polynomial basis $\{p_i(x)\}$ satisfying a three-term recurrence relation. This includes computing the Euclidean remainders as well as the greatest common divisor of two polynomials $u(x)$ and $v(x)$ of degrees less than $n$. The computational schemes involve solving linear systems with nested Hankel matrices represented in the given generalized basis and they reach the optimal sequential complexity of $O(n^2)$ arithmetical operations. |

Gemignani, Luca
The Lanczos method and its variants can be used to solve the rational interpolation problem efficiently. In this paper we present a suitable fast modification of a general look-ahead version of the Lanczos process in order to deal with polynomials expressed in the Chebyshev orthogonal basis. The proposed approach is particularly suited for rational interpolation at Chebyshev points, that is, at the zeros of Chebyshev polynomials. In fact, in this case it overcomes some of the numerical difficulties which limited the applicability of the look-ahead Lanczos process for determining the coefficients of both the numerators and the denominators with respect to the standard power basis. |

Gemignani, Luca
Fast orthogonalization schemes for $m\times n$ Vandermonde matrices $V=(z_i^j)$, introduced by Demeure, are extended to compute both a $QR$ factorization and an inverse $QR$ factorization of Vandermonde-like matrices $P=(p_j(z_i))$, where the polynomials $p_j(z)$ satisfy a three-term recurrence relation. In this way we are able to solve least squares (LS) problems of the form \begin{eqnarray*} {\rm minimize } \ ||{\bf b}-P{\bf x}||_2 \end{eqnarray*} using only $O(mn)$ arithmetical operations and $O(m)$ storage. |
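The kind of problem solved here can be illustrated with a small dense sketch: build a Vandermonde-like matrix from the Chebyshev three-term recurrence and solve the LS problem by modified Gram-Schmidt QR. This generic $O(mn^2)$ construction only shows the problem setup, not the fast $O(mn)$ scheme of the abstract.

```python
def chebyshev_vandermonde(xs, n):
    """Vandermonde-like matrix P = (T_j(x_i)) built from the
    Chebyshev three-term recurrence T_{j+1} = 2x T_j - T_{j-1}."""
    rows = []
    for x in xs:
        t = [1.0, x]
        for _ in range(2, n):
            t.append(2 * x * t[-1] - t[-2])
        rows.append(t[:n])
    return rows

def ls_via_mgs(a, b):
    """Least squares min ||b - a x||_2 via modified Gram-Schmidt QR
    followed by back substitution on R x = Q^T b."""
    m, n = len(a), len(a[0])
    q = [list(col) for col in zip(*a)]         # columns of A
    r = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for i in range(j):                     # orthogonalize against q_i
            r[i][j] = sum(q[i][k] * q[j][k] for k in range(m))
            q[j] = [q[j][k] - r[i][j] * q[i][k] for k in range(m)]
        r[j][j] = sum(v * v for v in q[j]) ** 0.5
        q[j] = [v / r[j][j] for v in q[j]]
    qtb = [sum(q[j][k] * b[k] for k in range(m)) for j in range(n)]
    x = [0.0] * n
    for j in reversed(range(n)):               # back substitution
        x[j] = (qtb[j] - sum(r[j][i] * x[i] for i in range(j + 1, n))) / r[j][j]
    return x

# fit 2x^2 = T_0 + T_2 exactly: coefficients (1, 0, 1)
xs = [-1.0, -0.5, 0.0, 0.5, 1.0]
x = ls_via_mgs(chebyshev_vandermonde(xs, 3), [2 * v * v for v in xs])
assert all(abs(c - e) < 1e-9 for c, e in zip(x, [1.0, 0.0, 1.0]))
```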

Di Miele, Federico and Gallo, Giorgio
Most often, space is the scarce resource in Bus Depots located in congested urban areas: vehicles arriving at the end of their trips are packed together in a rather small space. This implies that, when a vehicle has to leave to start a new trip, other vehicles often have to be moved to clear the way. Here a new model, based on "Min Non Crossing Matching" and "Generalized Assignment", is presented to allocate parking spaces to vehicles in a Depot in order to minimize the shunting cost. The results obtained by applying our approach to a real-life case are discussed. |

Cappanera, Paola and Gallo, Giorgio and Maffioli, Francesco
The problem of simultaneously locating obnoxious facilities and routing obnoxious materials between a set of built-up areas and the facilities is addressed. Obnoxious facilities are those facilities which cause nuisance to people as well as to the environment, e.g., dump sites, chemical industrial plants, electric power supply networks, nuclear reactors and so on. A discrete combined location-routing model, which we refer to as the Obnoxious Facility Location and Routing model (OFLR), is defined. OFLR is an NP-hard problem, for which a Lagrangean heuristic approach is presented. The proposed Lagrangean relaxation allows us to decompose OFLR into a Location subproblem and a Routing subproblem; such subproblems are then strengthened by adding suitable inequalities. Based on this Lagrangean relaxation, two simple Lagrangean heuristics are provided. An effective Branch and Bound algorithm is then presented, which aims at reducing the gap between the above mentioned lower and upper bounds. Our Branch and Bound exploits the information gathered while going down the enumeration tree in order to efficiently solve the subproblems related to other nodes. This is accomplished by using a bundle method to solve the Lagrangean dual at each node. Some variants of the proposed Branch and Bound method are defined in order to identify the best strategy for different classes of instances. A comparison of the computational results relative to these variants is presented. |

Cappanera, Paola and Frangioni, Antonio
An effective Branch and Bound algorithm based on a Lagrangean heuristic is presented, which aims at reducing the gap between the lower and upper bounds. The proposed Branch and Bound method exploits the information gathered while going down the enumeration tree in order to solve efficiently the subproblems related to other nodes. This is accomplished by using a bundle method to solve the Lagrangean dual at each node. A discrete combined location-routing model, which we refer to as the Obnoxious Facility Location and Routing model (OFLR), is used as a benchmark to validate the efficiency of the proposed method. This model simultaneously locates obnoxious facilities and routes obnoxious materials between a set of built-up areas and the facilities. Obnoxious facilities are those which cause nuisance to people as well as to the environment, e.g., dump sites, chemical industrial plants, electric power supply networks, nuclear reactors and so on. OFLR is a particular case of the Obnoxious Network Design problem, which arises when an obnoxious network, as in the case of electric power supply networks, must also be designed. These are meaningful problems given the industrial development and the increasing importance of environmental issues in our society. The use of the Branch and Bound to solve this latter problem is under development. |

Prencipe, Giuseppe
Over the past few years, the focus of robotic design has been moving from a scenario where few, specialized (and expensive) units were used to solve a variety of tasks, to a scenario where many, general-purpose (and cheap) units are used to achieve some common goal. Consequently, part of the focus has been to better understand how to efficiently coordinate and control a set of such units. |

Corradini, Andrea and Heckel, Reiko and Montanari, Ugo
In this paper we address the issue of providing a structured coalgebra presentation of transition systems with algebraic structure on states determined by an equational specification $\Gamma$. More precisely, we aim at representing such systems as coalgebras for an endofunctor on the category of $\Gamma$-algebras. The systems we consider are specified by using |

Pallottino, Stefano and Scutellà, Maria Grazia
We propose an algorithm which reoptimizes shortest paths in a very general situation, namely when the costs of an arbitrary subset of arcs of the input graph change, and the new costs can be either lower or higher than the old ones. This situation is more general than the ones addressed in the literature so far. |

Cappanera, Paola and Trubian, Marco
We consider an extension of the 0-1 multidimensional knapsack problem in which there are greater-than-or-equal inequalities, called demand constraints, besides the standard less-than-or-equal constraints. Moreover, the objective function coefficients are not constrained in sign. This problem is worth considering because it is embedded in models of practical applications, because it has an intriguing combinatorial structure, and because it appears to be a challenging problem for commercial ILP solvers. Our approach is based on a nested tabu search algorithm in which neighbourhoods of different structure are exploited. A first, higher-level tabu search is carried out in which both feasible and infeasible regions are explored. In this diversification phase, an oscillation method proceeds by alternating between constructive and destructive steps. Occasionally, a lower-level tabu search which implements an intensification strategy is applied. In this second phase only feasible solutions are analysed. The algorithm has been tested on a wide set of instances. Computational results are discussed. |
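As an illustration of the model (not of the authors' nested tabu search), the following sketch encodes a made-up instance with knapsack rows Ax <= b and demand rows Dx >= d, and explores it with a naive penalized single-flip local search:

```python
import random

def violation(x, A, b, D, d):
    """Total infeasibility of binary x w.r.t. Ax <= b and Dx >= d."""
    v = 0.0
    for row, rhs in zip(A, b):
        v += max(0.0, sum(a * xi for a, xi in zip(row, x)) - rhs)
    for row, rhs in zip(D, d):
        v += max(0.0, rhs - sum(a * xi for a, xi in zip(row, x)))
    return v

def flip_search(c, A, b, D, d, iters=2000, penalty=100.0, seed=0):
    """Single-flip local search on a penalized objective; flips that do not
    fall below the best score seen so far are kept, so the walk can cross
    plateaus while rejecting clearly worse moves."""
    rng = random.Random(seed)
    n = len(c)

    def score(x):
        return sum(ci * xi for ci, xi in zip(c, x)) - penalty * violation(x, A, b, D, d)

    x = [0] * n
    best, best_score = x[:], score(x)
    for _ in range(iters):
        j = rng.randrange(n)
        x[j] = 1 - x[j]                      # try flipping one variable
        s = score(x)
        if s >= best_score - 1e-9:
            if s > best_score:
                best, best_score = x[:], s
        else:
            x[j] = 1 - x[j]                  # undo a clearly worse flip
    return best, best_score

# Made-up instance: maximize 3x1 + x2 + 2x3
# s.t. 2x1 + 2x2 + x3 <= 3 (knapsack row) and x1 + x2 + x3 >= 1 (demand row).
c = [3, 1, 2]
A, b = [[2, 2, 1]], [3]
D, d = [[1, 1, 1]], [1]
x_best, s_best = flip_search(c, A, b, D, d)
```

Note how the demand row makes the all-zero vector infeasible, which is exactly what distinguishes this variant from the standard multidimensional knapsack.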

Bigi, Giancarlo and Pappalardo, Massimo
Since a vector program does not have a single optimal value but a set of optimal ones, the analysis of duality gaps requires at least the comparison of two sets of vector optimal values. Relying only on a weak duality property, the situations that can occur are analysed in detail and some concepts of duality gap are proposed. |

Dell'Olmo, P. and Hansen, P. and Pallottino, Stefano and Storchi, G.
In this work we study various uniform $k$-partition problems which consist in partitioning a collection of $m$ sets, each of them of cardinality $k$, into $k$ sets of cardinality $m$ such that each of these sets contains exactly one element coming from every original set. The problems differ according to the particular measure of ``set uniformity'' to be optimized. Most of the studied problems are polynomial and the corresponding solution algorithms are provided. A few of them are proved to be NP-hard. Examples of applications to scheduling and routing problems are also discussed. |
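The feasibility requirement of these problems can be met by a simple cyclic construction: element j of set i goes to part (i + j) mod k, so every part receives exactly one element from each original set. The sketch below only illustrates feasibility, not the optimization of any particular uniformity measure:

```python
def round_robin_partition(sets):
    """Distribute element j of set i to part (i + j) mod k: each of the
    k parts gets exactly one element from every original set."""
    k = len(sets[0])
    parts = [[] for _ in range(k)]
    for i, s in enumerate(sets):
        for j, elem in enumerate(s):
            parts[(i + j) % k].append(elem)
    return parts

# m = 3 sets of cardinality k = 2 -> 2 parts of cardinality 3.
parts = round_robin_partition([[1, 2], [3, 4], [5, 6]])
```

Optimizing a uniformity measure over all such feasible partitions is where the polynomial algorithms (or NP-hardness) of the paper come into play.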

Coppola, Massimo and Vanneschi, Marco
We show how to apply a Structured Parallel Programming methodology based on skeletons to Data Mining problems, reporting several results about three commonly used mining techniques, namely association rules, decision tree induction and spatial clustering. We analyze the structural patterns common to these applications, looking at application performance and software engineering efficiency. Our aim is to clearly state what features a Structured Parallel Programming Environment should have to be useful for parallel Data Mining. Within the skeleton-based PPE SkIE that we have developed, we study the different patterns of data access of parallel implementations of Apriori, C4.5 and DBSCAN. We need to address large partition reads, frequent and sparse accesses to small blocks, as well as an irregular mix of small and large transfers, to allow efficient development of applications on huge databases. We examine the addition of an object/component interface to the skeleton-structured model, to simplify the development of environment-integrated, parallel Data Mining applications. |

Cappanera, Paola and Gallo, Giorgio
The problem of finding a work assignment for the crew members of an airline in a given time horizon is addressed. In the literature such a problem is usually referred to as the Airline Crew Rostering problem; it consists of constructing monthly schedules for crew members by assigning them pairings, rest periods, leaves, training periods, union activities, and so forth, so as to satisfy the collective agreements and security rules. We formulate the Airline Crew Rostering problem as a 0-1 Multicommodity Flow problem where each employee corresponds to a commodity; determining a monthly schedule for an employee amounts to computing a path on a suitably defined graph while satisfying the clauses of the union conventions and distributing the workload evenly among the employees. Each path on the graph alternates between activity arcs, which represent the activities to be covered in the programming period, and compatibility arcs, which link together pairs of activities that can be assigned consecutively to the same employee. A preprocessing phase is performed which reduces the dimension of the graph. In order to tighten the linear programming formulation of our model we propose some families of valid inequalities which have proven to be computationally very effective; some of them have the characteristic that they can be treated implicitly in the construction of the graph. Computational results obtained with a commercial integer programming solver (CPLEX) are analyzed. Finally, some decomposition techniques which exploit the particular structure of the problem are identified as an alternative solution approach. |

Gervasi, Vincenzo and Prencipe, Giuseppe
Control and coordination of a set of autonomous vehicles that can freely move on a plane is a widely studied topic in robotics. The focus on this kind of problem has grown in recent years because of the increased interest in studying systems populated by many simple units, instead of a few powerful ones. In particular, these units simply observe the environment by using their sensors, and react following simple rules: the reaction to the environmental stimuli is called the {\em behavior} of the unit. In this paper we study the {\em flocking problem}: a set of mobile units is required to follow a leader unit while keeping a predetermined formation (i.e., they are required to move in a flock, like a group of soldiers). Moreover, the units in the flock do not know beforehand the path the leader will take: their task is just to follow the leader wherever it goes, and to keep the formation while moving. |

Pisanti, Nadia and Marangoni, Roberto and Ferragina, Paolo and Frangioni, Antonio and Luccio, Fabrizio and Savona, Antonio
Genes belonging to the same organism are called paralogs when they show a significant similarity in their sequences, even if they have a different biological function. It is an emergent biological paradigm that the families of paralogs in a genome derive from a mechanism of iterated gene duplication-with-modification. In order to investigate the evolution of organisms, it can be useful to infer the duplications that have taken place starting from a hypothetical original gene, and that have led to the current family of paralog genes. This information can naturally be represented in a paralogy tree. Here we present a method, called PaTre, to generate a paralogy tree from a family of paralog genes. PaTre uses new techniques motivated by the specific application. The reliability of the inferential process is tested by means of a simulator that implements different hypotheses on the duplication-with-modification paradigm, and checked on real data. |

Franceschini, Gianni
Let k be a positive integer. A k-cycle is a connected graph in which each vertex has degree greater than k. A k-dense forest is a graph in which no subgraph is a k-cycle; if a k-dense forest is connected, then it is a k-dense tree. A k-leaf is a vertex of a k-dense forest with degree less than or equal to k. Any k-dense forest has at least one k-leaf. If a k-leaf is removed, the resulting graph is still a k-dense forest. This fact is the basis of another characterization of k-dense forests which makes use of the concept of k-elimination, a particular ordering of removal for the vertices of a k-dense forest. In this paper, we study some new properties of complete k-dense trees, a subclass of k-dense trees. Such properties reveal some interesting relations between the class of complete k-dense trees and the widely studied class of k-trees. |
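The characterization recalled above suggests a standard peeling procedure: repeatedly remove vertices of degree at most k; the graph is a k-dense forest exactly when every vertex is eventually removed, the removal order being a k-elimination. A minimal sketch of this test (an illustration of the definitions, not of the paper's results on complete k-dense trees):

```python
from collections import deque

def is_k_dense_forest(adj, k):
    """Peel vertices of degree <= k repeatedly (a k-elimination ordering).
    The graph is a k-dense forest iff every vertex is eventually removed,
    i.e. no subgraph remains in which all degrees exceed k."""
    deg = {v: len(ns) for v, ns in adj.items()}
    removed = set()
    queue = deque(v for v, dv in deg.items() if dv <= k)
    while queue:
        v = queue.popleft()
        if v in removed:
            continue
        removed.add(v)
        for u in adj[v]:
            if u not in removed:
                deg[u] -= 1
                if deg[u] <= k:
                    queue.append(u)
    return len(removed) == len(adj)

# A path is a 1-dense forest; a triangle contains a 1-cycle (itself).
path = {0: [1], 1: [0, 2], 2: [1]}
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
```

For k = 1 this peeling is the usual leaf-removal test for forests, which the k-dense notion generalizes.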

Scaparra, Maria Paola and Scutellà, Maria Grazia
As evidenced by the remarkable diversity of real-world applications which have been modeled and solved as location problems, the field studying the optimal location of facilities is a very broad and interdisciplinary research area. The purpose of this paper is to fit the large variety of location models within a general unified framework, which arises from the description of the three building blocks of location problems, namely: facilities, customers, and locations. We provide evidence of how a particular problem specification can be stated mathematically as an optimization problem by suitably combining into a compact and workable model the main features that characterize and relate these three elements. |

Accettella, Carlo J. and Del Corso, Gianna M. and Manzini, Giovanni
We consider the problem of inverting block circulant with circulant blocks (BCCB) matrices with entries over the field Z_p. This problem arises in the study of two-dimensional linear cellular automata. Since the standard reduction to diagonal form by means of the FFT has some drawbacks when working over Z_p, we solve this problem by transforming it into the equivalent problem of inverting a circulant matrix with entries over a suitable ring R. We show that a BCCB matrix of size mn can be inverted in O(m n c(m,n)) operations in Z_p, where c is a low-degree polynomial in log m and log n. |

Franceschini, Gianni
Let k be a positive integer. A k-cycle is a connected graph in which each vertex has degree greater than k. A k-dense forest is a graph in which no subgraph is a k-cycle; if a k-dense forest is connected, then it is a k-dense tree. A k-leaf is a vertex of a k-dense forest with degree less than or equal to k. Any k-dense forest has at least one k-leaf. If a k-leaf is removed, the resulting graph is still a k-dense forest. This fact is the basis of another characterization of k-dense forests which makes use of the concept of k-elimination, a particular ordering of removal for the vertices of a k-dense forest. In this paper, we consider some NP-complete problems in the area of Graph Theory. For each of these problems we study the restriction to the class of complete k-dense trees, for a generic integer k, and we try to establish whether there are some values of k for which the restriction of the original problem is in P and some other values of k for which this restriction remains NP-complete. |

Bevilacqua, Roberto and Bozzo, Enrico and Menchi, Ornella
The reconstruction of diagnostic images obtained from SPECT (Single Photon Emission Computed Tomography) requires numerical models which involve structured matrices: Toeplitz and block circulant. In this note an algorithm is proposed, with the aim of exploiting the computational properties enjoyed by these structures. |

Bigi, Giancarlo and Castellani, Marco
Exploiting different tangent cones, many derivatives for set-valued functions have been introduced and considered to study optimality. The main goal of the paper is to address a general concept of K-epiderivative and to employ it to develop a quite general scheme for necessary optimality conditions in set-valued problems. |

Luccio, Fabrizio and Mesa Enriquez, Antonio and Olivares Rieumont, P. and Pagli, Linda
The problem of exact subtree matching is that of deciding whether a pattern tree P of m vertices is a subtree of a text tree T of n vertices, m <= n, and, in the affirmative case, finding all the occurrences of P in T. We consider ordered and unordered rooted trees with labeled vertices (the case of unlabeled vertices is a special case of this), and show how the problem can be solved in Theta(m + log n) time once a proper data structure is built for T in a preprocessing phase requiring Theta(n) time and space. Regarding T as an immutable text on which several queries are made, P as the contents of one such query, and assuming m = o(n), we can speak of search time sublinear in the size m+n of the overall structure. The number of occurrences of P in T does not appear in the search time because all such occurrences can be directly reconstructed from a constant amount of output information. |

Baldan, Paolo and Mancarella, Paolo and Raffaetà, Alessandra and Turini, Franco
In this paper we introduce MuTACLP, a knowledge representation language which provides facilities for modeling and handling temporal information, together with some basic operators for combining different temporal knowledge bases. The proposed approach stems from two separate lines of research: the general studies on meta-level operators on logic programs introduced by Brogi et al. and Temporal Annotated Constraint Logic Programming (TACLP) defined by Fruehwirth. In MuTACLP atoms are annotated with temporal information which is managed via a constraint theory, as in TACLP. Mechanisms for structuring programs and combining separate knowledge bases are provided through meta-level operators. The language is given two different and equivalent semantics: a top-down semantics which exploits meta-logic, and a bottom-up semantics based on an immediate consequence operator. |

Baldan, Paolo and Corradini, Andrea and Ehrig, H. and Heckel, Reiko
In order to model the behaviour of open concurrent systems by means of Petri nets, we introduce open Petri nets, a generalization of the ordinary model where some places, designated as open, represent an interface of the system towards the environment. Besides generalizing the token game to reflect this extension, we define a truly concurrent semantics for open nets by extending the Goltz-Reisig process semantics of Petri nets. We introduce a composition operation over open nets, characterized as a pushout in the corresponding category, suitable to model both interaction through open places and synchronization of transitions. The deterministic process semantics is shown to be compositional with respect to such composition operation. If a net Z3 results from the composition of two nets Z1 and Z2 having a common subnet Z0, then any two deterministic processes of Z1 and Z2 which ``agree'' on the common part can be ``amalgamated'' to produce a deterministic process of Z3. Vice versa, any deterministic process of Z3 can be decomposed into processes of the component nets. The amalgamation and decomposition operations are shown to be inverse to each other, leading to a bijective correspondence between the deterministic processes of Z3 and pairs of deterministic processes of Z1 and Z2 which agree on the common subnet Z0. Technically, our result is similar to the amalgamation theorem for data types in the framework of algebraic specification. A possible application field of the proposed constructions and results is the modeling of interorganizational workflows, recently studied in the literature. This is illustrated by a running example. |

Bernasconi, Anna and Ciriani, Valentina and Luccio, Fabrizio and Pagli, Linda
In the framework of SPP minimization, a three level logic synthesis technique developed in recent years, we exploit the ``regularity'' of Boolean functions to decrease minimization time. Our main results are: 1) the regularity of a Boolean function $f$ of $n$ variables is expressed by its \emph{autosymmetry degree} $k$ (with $0 \le k\le n$), where $k=0$ means no regularity (that is, we are not able to provide any advantage over standard synthesis); 2) for $k\geq 1$ the function is {\em autosymmetric}, and a new function $f_k$ is identified in polynomial time; $f_k$ is ``equivalent'' to, but smaller than $f$, and depends on $n-k$ variables only; 3) given a minimal SPP form for $f_k$, a minimal SPP form for $f$ is built in linear time; 4) experimental results show that 61\% of the functions in the classical \textsc{Espresso} benchmark suite are autosymmetric, and the SPP minimization time for them is critically reduced; we can also solve cases otherwise practically intractable. We finally discuss the role and meaning of autosymmetry. |

Ciriani, Valentina and Ferragina, Paolo and Luccio, Fabrizio and Muthukrishnan, S.
Data warehouses are increasingly storing and managing large scale string data, and dealing with large volume of transactions that update and search string databases. Motivated by this context, we initiate the study of {\em self-adjusting} data structures for string dictionary operations, that is, data structures that are designed to be efficient on an entire sequence of operations rather than individual string operations. Furthermore, we study this problem in the external memory model where string data is too massive to be stored in main memory and has to reside in disks; each access to a disk page fetches $B$ items, and the cost of the operations is the number of pages accessed. We show that given $n$ strings $S_{1},\ldots,S_{n}$ of total length $\sum_i |S_i|=N$, a sequence of $m$ string searches $S_{i_{1}},S_{i_{2}},\ldots,S_{i_{m}}$ takes $ O(\sum_{j=1}^{m}(\frac{|S_{i_{j}}|}{B})+\sum _{i=1}^{n}(n_{i}\log_{B}\frac{m}{n_{i}}))$ amortized expected I/Os, where $n_{i}$ is the number of times $S_{i}$ is queried. Inserting or deleting a string $S$ takes $O(\frac{|S|}{B}+\log_{B}n)$ amortized expected I/Os. This result is the analog of what is known as the Static Optimality Theorem~\cite{Sleator-Tarjan} proved by Sleator and Tarjan in their classic splay trees paper for a dictionary of numerical values; here, it has been generalized, for the first time to string data, to string operations, and to the external memory model. This performance is achieved not by traditional ``splay'' operations on search trees as in~\cite{Sleator-Tarjan}, but by designing a novel self-adjusting data structure based on the well-known skip lists. In addition, we introduce the paradigm of using the main memory (or a part thereof) persistently across operations, in the manner of a cache, to further improve the performance of our self-adjusting skip list. This is quite reasonable in applications where string operations are large in volume. |
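For readers unfamiliar with the underlying structure, a plain (non-self-adjusting, in-memory) skip list over string keys can be sketched as follows; the report's data structure adds self-adjustment and external-memory organization on top of this basic randomized design:

```python
import random

class _Node:
    __slots__ = ("key", "fwd")
    def __init__(self, key, level):
        self.key = key
        self.fwd = [None] * level        # forward pointers, one per level

class SkipList:
    """Textbook randomized skip list over string keys."""
    MAX_LEVEL = 16

    def __init__(self, seed=0):
        self._rng = random.Random(seed)
        self._head = _Node(None, self.MAX_LEVEL)

    def _random_level(self):
        lvl = 1
        while lvl < self.MAX_LEVEL and self._rng.random() < 0.5:
            lvl += 1
        return lvl

    def search(self, key):
        node = self._head
        for lvl in range(self.MAX_LEVEL - 1, -1, -1):
            while node.fwd[lvl] is not None and node.fwd[lvl].key < key:
                node = node.fwd[lvl]
        node = node.fwd[0]
        return node is not None and node.key == key

    def insert(self, key):
        update = [None] * self.MAX_LEVEL
        node = self._head
        for lvl in range(self.MAX_LEVEL - 1, -1, -1):
            while node.fwd[lvl] is not None and node.fwd[lvl].key < key:
                node = node.fwd[lvl]
            update[lvl] = node           # last node before key on this level
        new = _Node(key, self._random_level())
        for lvl in range(len(new.fwd)):
            new.fwd[lvl] = update[lvl].fwd[lvl]
            update[lvl].fwd[lvl] = new

sl = SkipList()
for word in ["banana", "apple", "cherry"]:
    sl.insert(word)
```

In the external-memory setting of the report, nodes are grouped into disk pages of B items and levels are adjusted according to access frequencies rather than coin flips.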

Gervasi, Vincenzo
Domain-based parsing is a shallow parsing technique that exploits knowledge about domain-specific properties of terms in order to determine "optimal" parse trees for natural language sentences. Cico is a simple parser using domain-based parsing. It is particularly well suited for parsing natural language sentences of a technical nature (e.g., requirements documents for software systems), as in this case several simplifying assumptions hold, and it has been used successfully in several experiments in the requirements engineering field. In the first part of this report, we formally describe the parsing algorithm implemented by Cico, and give some experimental data on its performance. In the second part, we provide a complete user manual for cico3, an implementation of the Cico algorithm, and for a number of associated tools. Finally, in the third part, we present some illustrative examples taken from real applications. |

Ahuja, Ravindra K. and Orlin, James B. and Pallottino, Stefano and Scutellà, Maria Grazia
In this paper, we study dynamic shortest path problems, in which one must determine a shortest path from a specified source node to every other node in a network where arc travel times change dynamically. We consider two problems: the minimum time walk problem (which is to find a walk with the minimum travel time) and the minimum cost walk problem (which is to find a walk with the minimum travel cost). The minimum time walk problem is known to be polynomially solvable for a class of networks called FIFO networks. This paper makes the following contributions: (i) we show that the minimum cost walk problem is an NP-complete problem; (ii) we develop a pseudopolynomial-time algorithm to solve the minimum cost walk problem (for integer travel times); and (iii) we develop a polynomial-time algorithm for the minimum time walk problem arising in road networks with traffic lights. |
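For FIFO networks, the minimum time walk problem can be solved by a label-setting, Dijkstra-like scheme in which each arc's travel time is a function of the departure time. A minimal sketch, with a made-up traffic-light-style arc:

```python
import heapq

def earliest_arrival(adj, source, t0):
    """Label-setting computation of earliest arrival times from `source`
    departing at time t0. adj[u] is a list of (v, tau) pairs, where tau(t)
    is the travel time of arc (u, v) when leaving u at time t. Correct
    whenever every tau is FIFO: t1 <= t2 implies t1 + tau(t1) <= t2 + tau(t2)."""
    arrival = {source: t0}
    heap = [(t0, source)]
    settled = set()
    while heap:
        t, u = heapq.heappop(heap)
        if u in settled:
            continue
        settled.add(u)
        for v, tau in adj.get(u, []):
            t_new = t + tau(t)
            if t_new < arrival.get(v, float("inf")):
                arrival[v] = t_new
                heapq.heappush(heap, (t_new, v))
    return arrival

# Hypothetical traffic-light arc: wait for the next multiple of 10, then cross in 2.
def light(t):
    return (-t) % 10 + 2

adj = {"a": [("b", light), ("c", lambda t: 5)], "c": [("b", lambda t: 1)]}
times = earliest_arrival(adj, "a", 3)
```

The FIFO property is what makes the greedy label-setting argument go through: waiting at a node can never pay off, so settled labels are final.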

Ferragina, Paolo and Manzini, Giovanni
In this paper we design two compressed data structures for the full-text indexing problem. These data structures support efficient substring searches within the indexed text $T$ using roughly the same space required to store $T$ in compressed form. Our first compressed data structure retrieves the $occ$ occurrences of a pattern $P[1,p]$ in $T[1,n]$ in $O(p + occ\log^{1+\epsilon} n)$ time and uses at most $5n H_k(T) + o(n)$ bits of storage, where $H_k(T)$ is the $k$-th order empirical entropy of $T$. This space occupancy is $\Theta(n)$ bits in the worst case and $o(n)$ bits for compressible texts. Our data structure exploits the relationship between the suffix array and the Burrows-Wheeler compression algorithm. Our second compressed data structure achieves $O(p+occ)$ query time and uses $O(n H_k(T)\log^\epsilon n) + o(n)$ bits of storage. In the worst case the space occupancy is $o(n\log n)$ bits which is asymptotically smaller than the space occupancy of suffix trees and suffix arrays. This second data structure exploits the interplay between two compressors: the Burrows-Wheeler algorithm and the LZ78 algorithm. |
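The Burrows-Wheeler transform that both data structures build upon can be illustrated directly from its definition via sorted cyclic rotations; the naive quadratic inversion shown here is for illustration only, not the index's machinery:

```python
def bwt(s):
    """Burrows-Wheeler transform via sorted cyclic rotations; s must end
    with a unique, lexicographically smallest sentinel such as '$'."""
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def ibwt(last_column):
    """Naive inversion: rebuild the sorted-rotation matrix one column at a
    time, then pick the row ending with the sentinel (O(n^2 log n))."""
    n = len(last_column)
    table = [""] * n
    for _ in range(n):
        table = sorted(last_column[i] + table[i] for i in range(n))
    return next(row for row in table if row.endswith("$"))
```

The transform groups together characters with similar contexts, which is what lets the index search the compressed text without decompressing it.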

Gallo, Giorgio
After a discussion on the relevance of ethics in Operations Research, two approaches to the ethical discourse, one based on rules and the other based on principles and values, are analyzed. Then, two ethical principles which can help O.R. researchers and practitioners in their activity are discussed in some detail. The first is the "responsibility principle", proposed in a more general context by the philosopher Hans Jonas, which in our case suggests taking into account in our work not only the point of view of the "client", but also the point of view of all the "stakeholders", i.e. those who can directly or indirectly be affected by the results of our activity. The second, which can be called the "sharing and cooperation principle", calls for a more open distribution of the results of our research activity, whether they are ideas, algorithms or software. |

Bozzo, Enrico and Fasino, Dario and Menchi, Ornella
Mixed and componentwise condition numbers are useful in order to understand stability properties of algorithms for solving structured linear systems. The DFT (discrete Fourier transform) is an essential building block of these algorithms. We obtain precise estimates of the mixed and componentwise condition numbers of the DFT. To this end we explicitly compute certain unimodular vectors (a complex vector is said to be unimodular if the moduli of all its entries are equal to one) whose DFT is again unimodular. |
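A classical family of such vectors are the quadratic chirps: for even n, the vector with entries exp(i pi k^2 / n) is unimodular and, by completing the square in the exponent, its unitary DFT has constant modulus as well. A numerical check of this fact (an illustration of the notion, not the note's condition-number estimates):

```python
import cmath

def unitary_dft(v):
    """Unitary DFT computed directly from the definition (O(n^2))."""
    n = len(v)
    return [sum(v[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            / cmath.sqrt(n) for j in range(n)]

n = 8                                    # any even n works for this family
chirp = [cmath.exp(1j * cmath.pi * k * k / n) for k in range(n)]
spectrum = unitary_dft(chirp)
```

Every entry of both `chirp` and `spectrum` has modulus 1 (up to rounding), exhibiting a vector that stays unimodular under the DFT.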

Amato, Gianluca and Scozzari, Francesca
We cope with the problem of correctness and optimality for the analysis of logic programs by abstract interpretation. We refine the Cortesi and Filé goal-dependent framework by fixing a correctness result and by introducing two specialized operators for forward and backward unification. We provide the best correct abstractions of the concrete operators in the case of set-sharing analysis. We show that the precision of the overall analysis is strictly improved and that, in some cases, we gain precision w.r.t. more complex domains involving linearity and freeness information. |

Montangero, Carlo and Semini, Laura
We introduce a temporal logic to reason on global applications. First, we define a modal logic for localities that embeds the local theories of each component into a theory of the distributed states of the system. We provide the logic with a sound and complete axiomatization. Then, we extend the logic with temporal operators. The contribution is that it is possible to reason about properties that involve several components, even in the absence of a global clock, as required in an asynchronous setting. We support our proposal by working out an example, a simple secure communication system. |

Ciriani, Valentina
Three-level logic SPP forms are OR of AND of EXORs expressions. In the framework of SPP minimization we give a new algebraic definition of SPP expressions using affine spaces. The main problems of the SPP model are: the ``hard to digest'' SPP theory; the time required for minimization, which is still high; and the unbounded fan-in EXOR gates in the form. Consequently, our main results in this paper are: 1) we rephrase the SPP theory using well-known algebraic structures (vector and affine spaces) to obtain an easier description of pseudocubes, which play the same role as cubes in standard minimization; 2) we describe a new canonical representation of pseudocubes leading to a more efficient data structure for SPP minimization; 3) we introduce a novel form, called the k-SPP form, where the number of literals in the EXOR factors is upper bounded by a chosen constant $k$, and show how to modify the SPP algorithms for computing the minimal k-SPP form efficiently. Finally, we perform an extensive set of experiments on classical benchmarks aimed at validating the new approach. |

Pisanti, Nadia and Crochemore, Maxime and Grossi, Roberto and Sagot, Marie-France
Motif inference is at the heart of several time-demanding computational tasks, such as the identification of structured motifs in sequences in molecular biology and data mining, and data compression, to name a few. In this scenario, a motif is a pattern that must appear repeated at least a certain number of times (the quorum) to be of interest. The pattern can be approximate, in that some of its characters can be left unspecified (the don't cares). Motif inference is not aimed at searching for a given pattern but, rather, at discovering all the possible patterns that appear as motifs in the given input string. The combinatorial explosion of these patterns makes their discovery an exponential-time computation. For this reason, the notion of basis has recently been introduced to succinctly represent all of them within reasonable time and space bounds. The goal of the paper is to shed light on the state of the art in this emerging field and to add further properties to what is currently known. |

Pallottino, Stefano and Sechi, G. M. and Zuddas, Paola
In this paper we present a scenario analysis approach for water system planning and management under conditions of climatic and hydrological uncertainty. The scenario analysis approach examines a set of statistically independent hydrological scenarios, and exploits the inner structure of their temporal evolution in order to obtain a "robust" decision policy, so that the risk of wrong decisions is minimised. In this approach, uncertainty is modelled by a scenario tree in a multistage environment, which includes different possible configurations of inflows over a wide time horizon. We propose a Decision Support System (DSS) that performs scenario analysis by identifying trends and essential features on which to base a robust decision policy. The DSS prevents obsolescence of optimiser codes by exploiting a standard data format, and a graphical interface provides easy data input and results analysis for the user. Results show that scenario analysis can be an alternative approach to stochastic optimisation when no probabilistic rules can be adopted and deterministic models are inadequate to represent uncertainty. Moreover, experimentation on a real water resources system in Sardinia, Italy, shows that practitioners and end-users can adopt the DSS with ease. |

Ferrari, Gianluigi and Tuosto, Emilio
We introduce a process calculus that contains constructs to express and program resource negotiations in systems of mobile processes. The proposed calculus is an extension of the Distributed $\pi$-calculus~\cite{HR98} and, therefore, it provides process mobility in networks that may dynamically change their topology. Negotiation takes place between processes, which ask for credits to achieve their purposes, and locations, which release credits depending on their availability. We describe a type system for the calculus which guarantees that well-typed processes cannot fail because of a misuse of credits over resources. The safety result is more involved due to process mobility: a migrating process should not access locations that do not guarantee safety. |

Ghelli, Giorgio and Conforti, Giovanni
We study the decidability of a logic for describing processes with restricted names. We choose a minimal fragment of the Ambient Logic, but the techniques we present should apply to every logic which uses Cardelli and Gordon's revelation and hiding operators, and Gabbay and Pitts' freshness quantifier. We start from the static fragment of the Ambient Logic that Calcagno, Cardelli and Gordon proved to be decidable. We prove that the addition of a hiding quantifier makes the logic undecidable. Hiding can be decomposed as freshness plus revelation. Quite surprisingly, freshness alone is decidable, but revelation alone is not. |

Gentile, G. and Nguyen, S. and Pallottino, Stefano
Passengers on a transit network with common lines are often faced with the problem of choosing between boarding the arriving bus or waiting for a faster one. Many existing assignment models are based on the classical assumption that at a given stop passengers board the first arriving carrier of a certain subset of the available lines. It has been shown that, in this case, an {\it optimal subset} of lines that minimizes the passenger travel times exists and is easily determined if the headway distributions are exponential. However, when on-line information on future arrivals of buses is posted at the stops, it is unlikely that the above classical assumption holds. Passengers may choose in this case to board a line that offers the best combination of waiting time and expected remaining time. We propose in this paper a general framework for determining the probability of boarding each available line at a stop when on-line information on bus arrivals is provided to passengers. We also show that the classical case without on-line information may be interpreted as a particular instance of the proposed framework, and thus extend the results obtained with exponential distributions to general distributions. Numerical examples, based on various headway distributions proposed in the literature, are used to illustrate the combined impact of the regularity of transit lines and of the availability of information on bus arrivals at stops on the line loads and on the passenger travel times. |
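The classical exponential-headway case recalled above admits a well-known greedy computation of the optimal subset of attractive lines: sort lines by in-vehicle time and keep adding a line while its ride time is below the current expected total time, where a subset with frequencies f_i and ride times t_i yields expected total time (1 + sum f_i t_i) / sum f_i. A sketch with made-up data (the textbook common-lines rule, not the paper's general framework):

```python
def optimal_attractive_set(lines):
    """Classical common-lines rule under exponential headways.
    lines: (frequency, ride_time) pairs. Boarding the first arriving
    carrier of a subset S yields expected total time
    (1 + sum_{i in S} f_i * t_i) / sum_{i in S} f_i; the optimal S is
    built greedily by increasing ride time."""
    lines = sorted(lines, key=lambda ft: ft[1])
    freq_sum = weighted = 0.0
    chosen, expected = [], float("inf")
    for f, t in lines:
        if t >= expected:          # a slower line would only worsen the mix
            break
        freq_sum += f
        weighted += f * t
        expected = (1.0 + weighted) / freq_sum
        chosen.append((f, t))
    return chosen, expected

# Made-up data: frequencies in buses/minute, ride times in minutes.
subset, exp_time = optimal_attractive_set([(0.1, 30.0), (1 / 15, 20.0)])
```

With posted arrival information, boarding decisions instead depend on the actual announced waits, which is the situation the proposed framework addresses.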

Baldan, Paolo and Bracciali, Andrea and Bruni, Roberto
Behavioural equivalences on open systems are usually defined by comparing system behaviour in all environments. Here, we introduce a hierarchy of behavioural equivalences for open systems in the setting of process calculi, building on a symbolic approach proposed in a previous paper. The hierarchy comprises both branching, bisimulation-based, and non-branching, trace-based, equivalences. Symbolic equivalences are amenable to effective analysis techniques (e.g., the symbolic transition system is finitely branching under mild assumptions), which turn out to be correct, but often not complete, due to redundant information. Two kinds of redundancy, syntactic and semantic, are discussed, and one class of symbolic equivalences is identified that deals satisfactorily with syntactically redundant transitions, which are a primary source of incompleteness. |

Bevilacqua, Roberto and Del Corso, Gianna M.
This paper concerns the study of a unitary transformation from a generic symmetric matrix $A$ into a semiseparable matrix. The problem is studied both theoretically and from an algorithmic point of view. In particular, we first give a proof of the existence of such a transformation, and then we discuss its uniqueness, proving an Implicit-$Q$ Theorem for semiseparable matrices. Finally, we study structural properties of the factors of the $QR$-decomposition of a semiseparable matrix. These properties allow us to design a method, based on $QR$ iterations applied to a semiseparable matrix, for reducing a symmetric matrix to semiseparable form. This method has the same asymptotic cost as the reduction of a symmetric matrix to tridiagonal form. Once the transformation has been accomplished, if one is interested in computing the eigenvalues, each further $QR$ iteration can be done in linear time. |

Franceschini, Gianni
In this paper we give a positive answer to the long-standing problem of finding an in-place sorting algorithm performing $O(n\log n)$ comparisons and $O(n)$ data moves in the worst case. Until now, the best in-place sorting algorithm with $O(n)$ moves was that proposed by Munro and V. Raman. Their algorithm requires $O(n^{1+\epsilon})$ comparisons in the worst case, for any $\epsilon > 0$. Later, Katajainen and Pasanen discovered the first in-place algorithm with $O(n\log n)$ comparisons and $o(n\log n)$ moves, namely $O(n\log n/\log\log n)$ moves. Our algorithm achieves the same number of comparisons but with $O(n)$ moves, which is asymptotically optimal. |

Romani, Francesco
MAJA is a package for MAtrix computation in JAva. The package allows various kinds of structured matrices to be defined and operated on in a unified way; General, Symmetric, Sparse, Band, Toeplitz, and Circulant matrices are implemented, among others. It is also possible to operate on Block matrices, Additive Structures, Tensor Products, Implicit Inverses, and so on. The package organization is similar to that of the Java Collections API. It is easy to exploit the presence of more than one processor without affecting the final user code. The source code is platform independent and has been tested on Linux, Windows, and Macintosh computers; test benchmarks are presented, both on single- and double-processor computers. Examples of the implementation of involved linear algebra algorithms are also presented. |

Ferragina, Paolo and Manzini, Giovanni
In this paper we provide the first compression booster that turns a zeroth order compressor into a more effective $k$-th order compressor without any loss in time and space efficiency. More precisely, let $\mathcal{A}$ be an algorithm that compresses a string~$s$ within $\lambda |s| H_0(s) + \mu$ bits of storage in $O(T(|s|))$ time, where $H_0(s)$ is the zeroth order entropy of the string $s$. Our booster improves $\mathcal{A}$ by compressing~$s$ within $\lambda |s| H_k(s) + \log_2 |s| + g_k$ bits, still using $O(T(|s|))$ time, where $H_k(s)$ is the $k$-th order entropy of~$s$. The idea of a ``compression booster'' has been very recently introduced by Giancarlo and Sciortino in [CPM 2003]. They combined the Burrows-Wheeler Transform with dynamic programming and achieved the same compression bound as ours, but with running time $O(T(|s|)) + \Omega(|s|^2)$. We start from the same premises as Giancarlo and Sciortino, but instead of using dynamic programming we design a linear-time optimization algorithm based on novel structural properties of the Burrows-Wheeler Transform. |
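The zeroth-order (empirical) entropy $H_0(s)$ appearing in the bounds has a standard definition, $H_0(s) = -\sum_c (n_c/|s|)\log_2(n_c/|s|)$ over the symbol counts $n_c$; a minimal sketch (the helper name `h0` is ours):

```python
import math
from collections import Counter

# Empirical zeroth-order entropy H_0(s): |s| * H_0(s) is the bit target
# of a zeroth-order compressor, the quantity the booster improves to H_k.
def h0(s: str) -> float:
    n = len(s)
    counts = Counter(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

print(h0("aaaa"))   # 0.0: a single symbol carries no information
print(h0("abab"))   # 1.0: two equiprobable symbols cost one bit each
```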

D'ambra, P. and Danelutto, Marco and di Serafino, D. and Lapegna, M.
In this paper we provide a view of the design and development activity concerning advanced environments for parallel and distributed computing. We start by assessing the main issues driving this research track, in the areas of hardware and software technology and of applications. Then, we identify some key concepts that can be considered as common guidelines and goals in the development of modern advanced environments, and we come up with a ``classification'' of these environments into two main classes: Programming Environments and Problem Solving Environments. Both classes are widely discussed, in light of the key concepts previously outlined, and several examples are provided, in order to give a picture of the current status and trends. |

Billi, Carolina and Gentile, G. and Nguyen, S. and Pallottino, Stefano
In this work, the phenomenon of waiting at a stop of an urban public transport service is analyzed in detail, taking into account also the regularity with which the vehicles of a given line pass by the stop. A model, called "dynamic", is proposed that eliminates the contradictions inherent in the commonly used model. An algorithmic approach that avoids the propagation of approximation errors is also presented. Finally, an adaptation of the model is shown for those public transport services in which information is provided to users at the stops, by means of "smart stop displays" ("paline intelligenti"). |

Ferrari, Gianluigi and Montangero, Carlo and Semini, Laura and Semprini, Simone
We introduce a high level declarative model for mobile and ad hoc computing. The model takes the form of a logical theory in a Unity-like linear time temporal logic and is specifically designed to deal with Network Awareness, Mobility, and Coordination. A companion design methodology based on formal refinements is also introduced in this paper. The model and its design methodology are validated through an illustrative case study, namely a system that coordinates the activities of mobile components over an ad hoc network. |

Hammer, B. and Micheli, Alessio and Sperduti, Alessandro
Self-organization constitutes an important paradigm in machine learning, with successful applications, e.g., in data and web mining. However, so far most approaches have been proposed for data contained in a fixed and finite-dimensional vector space. In this article we focus on extensions to more general data structures, like sequences and tree structures. Various extensions of the standard self-organizing map (SOM) to sequences or tree structures have been proposed in the literature: the temporal Kohonen map, the recursive SOM, and the SOM for structured data (SOMSD), for example. These methods enhance the standard SOM with recursive connections. We define in this article a general recursive dynamic which enables the recursive processing of complex data structures based on recursively computed internal representations of the respective context. The above mechanisms of SOMs for structures are special cases of the proposed general dynamic; furthermore, the dynamic also covers the supervised case of recurrent and recursive networks. The general framework offers a uniform notation for training mechanisms such as Hebbian learning, and for the transfer of alternatives such as vector quantization or the neural gas algorithm to structure-processing networks. The formal definition of the recursive dynamic for structure-processing unsupervised networks allows the transfer of theoretical issues from the SOM literature to the structure-processing case. One can formulate general cost functions corresponding to vector quantization, neural gas, and a modification of SOM for the case of structures. The cost functions can be compared to Hebbian learning, which can be interpreted as an approximation of a stochastic gradient descent. As an alternative, we derive the exact gradients for general cost functions, which can be computed in several ways similar to well-known learning algorithms for supervised recurrent and recursive networks.
We then discuss general design issues which should be taken into account when choosing internal representations, and which limit the applicability of specific choices in concrete situations: we investigate the necessary cardinality of the set of internal representations, the tolerance with respect to noise, and the necessity of global processing. Finally, we discuss the issue of topology preservation of structure-processing self-organizing maps and derive an explicit metric for a specific situation for SOMSD. |

Bruni, Roberto and Meseguer, J. and Montanari, Ugo and Sassone, Vladimiro
The algebraic models of computation for contextual nets that have been proposed in the literature either rely on a non-free monoid of objects, or introduce too many behaviors that must somehow be filtered out. In this report, we exploit partial membership equational logic to define a suitable theory of models, where the meaningful concurrent computations can be selected by means of membership predicates. |

Dell'Olmo, P. and Hansen, P. and Pallottino, Stefano and Storchi, G.
We study various uniform $k$-partition problems which consist in partitioning $m$ sets, each of cardinality $k$, into $k$ sets of cardinality $m$ such that each of these sets contains exactly one element from every original set. The problems differ according to the particular measure of "set uniformity" to be optimized. Most problems are polynomial and corresponding solution algorithms are provided. A few of them are proved to be NP-hard. Examples of applications to scheduling and routing problems are also discussed. |
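The feasibility structure of these problems is easy to illustrate: a round-robin (Latin-square style) assignment always produces a valid uniform $k$-partition, taking exactly one element from every original set. This sketch ignores the "set uniformity" measures actually being optimized in the paper; the function name is ours.

```python
# A trivially feasible uniform k-partition (illustrative only; it does
# not optimize any uniformity measure): part j takes element
# (i + j) mod k from the i-th input set.
def round_robin_partition(sets):
    m, k = len(sets), len(sets[0])
    return [[sets[i][(i + j) % k] for i in range(m)] for j in range(k)]

sets = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]  # m=4 sets of cardinality k=3
parts = round_robin_partition(sets)
print(parts)  # k=3 parts of cardinality m=4, one element from each original set
```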

Ambriola, Vincenzo and Gervasi, Vincenzo
This paper presents Circe, an environment for the analysis of natural language requirements. Circe is presented in terms of its architecture, based on a transformational paradigm. Details are given for the various transformation steps, including (i) a novel technique for parsing natural language requirements, and (ii) an expert system based on modular agents, embodying intensional knowledge about software systems in general. Some of the features of the environment are shown by means of an example. Various stages of requirements analysis are covered, from initial sketches to pseudo-code and UML models. |

Frangioni, Antonio and Manca, Antonio
In the last two decades, a number of algorithms for the linear single-commodity Min Cost Flow problem (MCF) have been proposed, and several efficient codes are available that implement different variants of the algorithms. The practical significance of the algorithms has been tested by comparing the time required by their implementations for solving ``from scratch'' instances of (MCF), of different classes, as the size of the problem (number of nodes and arcs) increases. However, in many applications several closely related instances of (MCF) have to be solved sequentially, so that reoptimization techniques can be used to speed up computations, and the most attractive algorithm is the one which minimizes the total time required to solve all the instances in the sequence. In this paper we compare the performance of four efficient implementations of algorithms for (MCF) under cost reoptimization, in the context of decomposition algorithms for the Multicommodity Min Cost Flow problem (MMCF), showing that for some classes of instances the relative performance of the codes doing ``from scratch'' optimization does not accurately predict their relative performance when reoptimization is used. Since the best solver depends both on the class and on the size of the instance, this work also shows the usefulness of a standard interface for (MCF) solvers that we have proposed and implemented. |

Bevilacqua, Roberto and Bozzo, Enrico and Menchi, Ornella
The problem of reconstructing a two-dimensional tomographic medical image is considered. The emission function $f(P)$ is defined on a plane circular domain $\Omega$ and has to be reconstructed from data collected by SPECT. In this paper the natural pixel discretization approach is used, for which suitable basis functions must be chosen. We show that by choosing the basis on a polar grid a highly structured coefficient matrix can be obtained. Numerical experimentation validates the efficiency of this approach. |

Anderegg, Luzi and Cieliebak, Mark and Prencipe, Giuseppe
The Weber point of a given point set $P$ is a point in the plane that minimizes the sum of all distances to the points in $P$. In general, the Weber point cannot be computed. However, if the points are in specific geometric patterns, then finding the Weber point is possible. We investigate the case of {\em biangular configurations}, where there is a center and two angles $\alpha$ and $\beta$ such that the angle at the center between any two adjacent points is either $\alpha$ or $\beta$, and these angles alternate. We show that in this case the center of biangularity is the Weber point of the points, and that it can be found in time linear in the number of points. |
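A quick numerical sanity check of the stated result, on a hypothetical biangular configuration with $\alpha = 60°$, $\beta = 120°$ (four points, arbitrary radii): every small step away from the center increases the sum of distances. The construction, radii and tolerances are our own illustrative choices.

```python
import math

# Sum of Euclidean distances from (x, y) to all points in pts.
def sum_dist(x, y, pts):
    return sum(math.hypot(px - x, py - y) for px, py in pts)

# Biangular configuration: angular gaps alternate 60°, 120°, 60°, 120°
# around the center (0, 0); radii are arbitrary.
alpha, beta = math.radians(60), math.radians(120)
angles = [0.0, alpha, alpha + beta, 2 * alpha + beta]
radii = [1.0, 2.5, 0.7, 3.0]
pts = [(r * math.cos(a), r * math.sin(a)) for r, a in zip(radii, angles)]

center = sum_dist(0.0, 0.0, pts)
# Probe a few directions at distance 0.01 from the center.
print(all(sum_dist(0.01 * math.cos(t), 0.01 * math.sin(t), pts) > center
          for t in [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]))  # True
```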

Pisanti, Nadia and Crochemore, Maxime and Grossi, Roberto and Sagot, Marie-France
We investigate the problem of determining the basis of repeated motifs with don't cares in an input string. We give new upper and lower bounds on the problem, introducing a new notion of basis that is provably smaller than (and contained in) previously defined ones. Our basis can be computed in less time and space, and is still able to generate the same set of motifs. We also prove that the number of motifs in all these bases grows exponentially with the quorum, the minimal number of times a motif must appear, a fact that went unnoticed in previous work. We show that a polynomial-time algorithm exists only for fixed quorum. |

Franceschini, Gianni and Grossi, Roberto
In the classical dictionary problem, a set of $n$ distinct keys over an unbounded and ordered universe is maintained under insertions and deletions of individual keys while supporting search operations. An implicit dictionary has the additional constraint of occupying only the space required to store the~$n$~keys, that is, exactly $n$ contiguous words of space in total. All that is known is the starting position of the memory segment hosting the keys, as the rest of the information is implicitly encoded by a suitable permutation of the keys. This paper describes the flat implicit tree, which is the first implicit dictionary requiring $O(\log n)$ time per search and update operation. |
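A classic ingredient behind implicit data structures (a general trick, not the flat implicit tree itself) is encoding auxiliary bits in the relative order of pairs of distinct keys, so the permutation is the only storage used; a minimal sketch with hypothetical helper names:

```python
# Encode auxiliary bits into a permutation of distinct keys:
# bit b of pair i is stored in the relative order of keys 2i and 2i+1.
def encode_bits(keys, bits):
    """keys: 2*len(bits) distinct keys in increasing order."""
    out = []
    for i, b in enumerate(bits):
        x, y = keys[2 * i], keys[2 * i + 1]   # x < y
        out += [y, x] if b else [x, y]        # descending pair encodes 1
    return out

def decode_bits(arr):
    return [1 if arr[i] > arr[i + 1] else 0 for i in range(0, len(arr), 2)]

enc = encode_bits([10, 20, 30, 40, 50, 60], [1, 0, 1])
print(enc)               # [20, 10, 30, 40, 60, 50]
print(decode_bits(enc))  # [1, 0, 1]
```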

Aldinucci, Marco and Danelutto, Marco
In this work we present an operational semantic schema suitable for skeleton-based parallel languages supporting both task and data parallelism. The semantics describes both the functional and the parallel behavior of the skeletal language in a uniform way, by means of a labeled transition system. We use the Lithium language (a Java skeleton framework) as a test-bed to describe the methodology. |

Buscemi, Marzia and Montanari, Ugo
In this paper, we propose a compositional coalgebraic semantics of the pi-calculus based on a novel approach for lifting calculi with structural axioms to coalgebraic models. We equip the transition system of the calculus with permutations, parallel composition and restriction operations, thus obtaining a bialgebra. No prefix operation is introduced, relying instead on a clause format defining the transitions of recursively defined processes. The unique morphism to the final bialgebra induces a bisimilarity relation which coincides with observational equivalence and which is a congruence with respect to the operations. The permutation algebra is enriched with a name extrusion operator $\delta$ à la De Bruijn, that shifts any name to the successor and generates a new name in the first variable. As a consequence, in the axioms and in the SOS rules there is no need to refer to the support, i.e., the set of significant names, and, thus, the model turns out to be first order. |

Bozzo, Enrico
There are various ways to prove that, under suitable conditions, the inverse of a Toeplitz matrix can be completely recovered from two of its columns. Toeplitz matrices and their inverses are matrices with displacement structure, i.e., they can be related with matrices of low rank via suitable linear operators. In this note, we show a possible generalization of the result concerning Toeplitz inverses for certain classes of matrices with displacement structure. |

Scaparra, Maria Paola and Pallottino, Stefano and Scutellà, Maria Grazia
This paper investigates the application of very-large neighborhood search techniques for solving the capacitated vertex $p$-center problem. We characterize a local search neighborhood in terms of path and cyclic exchanges of customers among facilities, and exploit principles borrowed from network optimization theory to efficiently detect cost decreasing solutions in such a neighborhood. We complement the multi-exchange methodology with a local reoptimization mechanism specifically designed to perform facility location adjustments. The validity of the proposed approach is supported by empirical investigation and performance comparisons with the commercial code CPLEX. |

Luccio, Fabrizio and Pagli, Linda
A $k$-dense tree, for integer $k \geq 1$, is a natural extension of a tree. A leaf is a vertex connected to the rest of the structure through $\leq k$ edges. After the removal of a leaf, the structure still has other leaves. Among other applications, dense trees can be used as interconnection structures in a network. Representing the network as a connected graph $G=(V,E)$, we construct a $k$-dense tree $T$ as a subgraph of $G$ spanning all its vertices. $T$ is then decomposed into $k$ {\em super-root spanning trees} sharing a $k$-clique $C$ of vertices. These trees are edge-disjoint except for the common edges in $C$. Each vertex is labeled with a set of $k$ integers in $\{0,1,\ldots,n-1\}$, $n=|V|$, to set up an interval routing scheme for $G$ along the edges of $T$. The constructions are implemented as distributed algorithms, all requiring polynomial time. |

Bruni, Roberto and Laneve, C. and Montanari, Ugo
We discuss the principles of distributed transactions, then we define an operational model which meets the basic requirements, and we give a prototype implementation for it in the join-calculus. Our model: (1) extends BizTalk with multiway transactions; (2) exploits an original algorithm for distributed commit; (3) can deal with dynamically changing communication topologies; (4) is almost language-independent. In fact, the model is based on a two-level classification of resources, which should be easily conveyed to distributed calculi and languages, providing them with a uniform transactional mechanism. |

Ahuja, Ravindra K. and Orlin, James B. and Pallottino, Stefano and Scaparra, Maria Paola and Scutellà, Maria Grazia
This paper presents a very large scale neighborhood (VLSN) search algorithm for the capacitated facility location problem with single-source constraints. The neighborhood structures are induced by customer multi-exchanges and by facility moves. We consider both single-customer multi-exchanges, detected on a suitably defined customer improvement graph, and multi-customer multi-exchanges, detected on a facility improvement graph dynamically built through the use of a greedy scheme. Computational results for some benchmark instances are reported, which demonstrate the effectiveness of the approach for solving large-scale problems. |

Vanneschi, Marco
ASSIST (A Software development System based upon Integrated Skeleton Technology) is a proposal of a new programming environment oriented to the development of parallel and distributed high-performance applications according to a unified approach. The main goals are: high-level programmability and software productivity for complex multidisciplinary applications, including data-intensive and interactive software; performance portability across different platforms, including homogeneous parallel machines and cluster/Beowulf systems, heterogeneous clusters and network computers, and computational Grids; effective reuse of parallel software; efficient evolution of applications through versions scalable according to the underlying technologies. The proposal is based on the evolution and joining of the software component technology and of the structured parallel programming technology. Component technology must be extended in the direction of parallel and distributed high-performance applications for emerging large-scale heterogeneous platforms, including Grids. According to our experience in structured parallel programming, in ASSIST we intend to overcome some limitations of the classical skeletons approach to improve generality and flexibility, expressive power and efficiency for irregular, dynamic and interactive applications, as well as for complex combinations of task and data parallelism. A new paradigm, called ``parallel module'' (parmod), is defined which, in addition to expressing the semantics of several skeletons as particular cases, is able to express more general parallel and distributed program structures. ASSIST allows the programmer to design the applications in the form of generic graphs of parallel components, including both data-flow and nondeterministic reactive computation. Another distinguishing feature is that ASSIST modules are able to utilize external objects, including shared data structures and abstract objects (e.g.
CORBA), with standard interfacing mechanisms. In turn, an ASSIST application can be reused and exported as a component for other applications, possibly expressed in different formalisms. A substantial part of our research project is devoted to the architecture and implementation of the support to the new environment. The purpose of this paper is to show the principles of the proposed approach in terms of the programming model (successive papers will deal with the environment implementation and with performance evaluation). Initially the motivations and the rationales for the proposal are discussed in the context of software component technology, large-scale platforms and structured parallel programming. Then the features and the characteristics of the ASSIST programming model are described according to an operational semantics style and using several examples to drive the presentation, to show the expressive power and to discuss the research issues. |

Bacci, Bruno and Pelagatti, Susanna
Keywords: Parallelism, Optimal Data Distribution/Collection, P3L. This document describes the MAP paradigm of parallelism and the problems related to its efficient implementation on a 2D-mesh. In particular, we first discuss how parallel algorithms exploiting MAP parallelism can be easily expressed by using the P3L Map construct. Then, we discuss an implementation template for a massively parallel architecture with a 2D-mesh topology and Transputer-like processing nodes. The template is asymptotically optimal with respect to the strategies embedded for data distribution, data collection and process allocation. |

Montanari, Ugo and Tuosto, Emilio
The spi-calculus \cite{ag98} has been presented as a (formal) setting used to describe and analyze protocols. Here we present a short description of the calculus, introduce a new LTS, proving its equivalence to the LTS defined by Abadi and Gordon, and give a representation of the spi-calculus in the Edinburgh Logical Framework introduced by Harper, Honsell and Plotkin. This Logical Framework provides a setting for defining formal systems based on a general treatment of syntax, rules and proofs, by means of a typed $\lambda$-calculus with dependent types. Finally, we prove that the LF-semantics and the LTS-semantics are equivalent. |

Attardi, Giuseppe and Esuli, Andrea and Simi, Maria
A number of applications require selecting targets for specific contents on the basis of criteria defined by the content providers themselves. Performing this task requires inverting the direction of search: retrieving queries from documents, rather than vice versa as in ordinary information retrieval. We present a retrieval system called Best Bets that handles this case. Best Bets generalizes Information Filtering and encompasses a variety of applications, including editorial suggestions, promotional campaigns and targeted advertising. In particular, we show how to model Google AdWords™ as a Best Bets system. We discuss how to address performance issues in the implementation of Best Bets, in particular efficient search, incremental updates and dynamic ranking. We report on experiments with an implementation of our techniques and on a comparison with previous approaches. |
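The inverted direction of search can be sketched with a toy index over stored queries: given a document, return every query all of whose terms the document contains. The class, names and conjunctive matching rule are illustrative assumptions, not the Best Bets implementation.

```python
from collections import defaultdict

# Toy "inverted" retrieval: index stored queries, then retrieve the
# queries matched by an incoming document (conjunctive semantics,
# no ranking). Illustrative sketch only.
class QueryIndex:
    def __init__(self):
        self.postings = defaultdict(set)   # term -> ids of queries using it
        self.queries = {}                  # query id -> set of its terms

    def add_query(self, qid, terms):
        self.queries[qid] = set(terms)
        for t in terms:
            self.postings[t].add(qid)

    def match(self, document_terms):
        doc = set(document_terms)
        candidates = set()
        for t in doc:                       # gather queries sharing a term
            candidates |= self.postings.get(t, set())
        return sorted(q for q in candidates if self.queries[q] <= doc)

idx = QueryIndex()
idx.add_query("q1", ["cheap", "flights"])
idx.add_query("q2", ["flights", "rome"])
print(idx.match(["cheap", "flights", "to", "rome"]))  # ['q1', 'q2']
print(idx.match(["cheap", "hotels"]))                 # []
```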

Leoni, Gualtiero and Perugini, Angela
This paper presents and analyzes a join algorithm for cube-connected multiprocessors. The algorithm, called PB_Cube, was developed to exploit the structural characteristics of the hypercube as much as possible, enabling us to achieve synchronization between processors, maximum parallelism, and a reduction of the traffic on the network. Moreover, it does not suffer from bucket overflow or buffer overflow problems. Experimental results indicate that the algorithm has linear speedup and scale-up over a significant range. Furthermore, its performance is comparable with that of Hybrid Hash join [9] and better than that of the Parallel Distributive algorithm [4]. Our algorithm has the advantage of being able to handle non-equijoins. These results become even more interesting when we consider that the costs for the Hybrid Hash and Parallel Distributive algorithms are given under hypotheses stronger than ours. |

Ferragina, Paolo and Gullì, Antonio
Recently there has been a surge of commercial interest about novel IR-tools, like Vivisimo or Groxis, that support the user of a search engine in his/her query formulation and query refinement. The basic idea is that the snippets returned by the search engine are grouped into clusters, which are then organized in a hierarchy whose nodes are properly labeled via meaningful sentences. Each sentence must capture the "theme" of the snippets contained in the cluster it labels. This way the user is provided with a small, but intelligible, picture of the query answers at various levels of detail. Despite this commercial interest, we found just four scientific papers on this topic. None of them achieved results comparable to Vivisimo, which actually represents the state-of-the-art. In the present paper we address this problem in its full generality: labels of variable length for denoting the clusters, labels drawn from the Web snippets as non-contiguous sequences of terms, clusters possibly overlapping and organized within a hierarchy. We achieve these results by means of an algorithmic approach that exploits some innovative ideas, at least from the academic side! |

Micheli, Alessio and Sona, Diego and Sperduti, Alessandro
In this note we consider the Contextual Recursive Cascade Correlation model (CRCC), which is able to learn contextual mappings in structured domains. Specifically, we propose a formal characterization of the "context window", i.e., given a state variable, the "context window" is the set of state variables directly or indirectly contributing to its determination. On the basis of this definition, a formal expression describing the "context windows" for the CRCC is derived. The resulting expression is very compact and describes the context window as a function of the state variables in the model, and some topological properties of the structures in input to the model. Moreover, the expression elucidates how the shape of the context window is modified by the introduction of new state variables, i.e., hidden units. For comparison, a similar expression is also derived for the non-contextual Recursive Cascade Correlation model. |

Bigi, Giancarlo
The paper aims to analyse and compare two different approaches to optimality conditions for multiobjective optimization problems involving nonsmooth functions. Both necessary and sufficient first-order conditions are presented for the case in which the constraint is given just as a set. Finally, the optimality conditions based on these two approaches are compared. |

Aldinucci, Marco and Torquati, Massimo
We present HOC: a fast, scalable objects repository providing programmers with a general storage module. HOC may be used to implement DSMs as well as distributed cache subsystems. HOC is composed of a set of hot-pluggable cooperating processes that may sustain close-to-optimal network traffic rates. We designed an HOC-based Web cache that extends the Apache Web server and markedly improves the performance of Apache farms, with no modification to the Apache core code. |

Aldinucci, Marco and Coppola, Massimo and Danelutto, Marco and Vanneschi, Marco and Zoccolo, Corrado
ASSIST (A Software development System based upon Integrated Skeleton Technology) is a programming environment oriented to the development of parallel and distributed high-performance applications according to a unified approach. The language and implementation features of ASSIST are a result of our long-term research in parallel programming models and tools. ASSIST is evolving towards programming environments for high-performance complex enabling platforms, especially Grids. In this paper, we show how ASSIST can act as a valid research vehicle to study, experiment and realize Grid-aware programming environments for high-performance applications. Special emphasis is put on the innovative methodologies, strategies and tools for dynamically adaptive applications, that represent the necessary step for the success of Grid platforms. First we discuss the conceptual framework for Grid-aware programming environments, based upon structured parallel programming and components technology, anticipating how ASSIST possesses the essential features required by such framework. Then we summarize the ASSIST programming model, showing its evolution, along the line of structured parallel programming, to solve critical problems of expressive power, flexibility, interoperability and efficiency; some examples, both of kernels and of complex applications, are used to point out the ASSIST features. The modular compiler model and the current implementation for heterogeneous platforms and Globus-based Grids are illustrated. We show the features that allow ASSIST programs to be used in CORBA infrastructures, that represents our basic starting point towards interoperability in Grid applications. Finally, the presentation of all the previous issues is used to derive an ASSIST-based model for supporting dynamically adaptive applications. |

Baragatti, Alberto and Bruni, Roberto and Melgratti, Hernan and Montanari, Ugo
We present a prototype application for distributed agreements in multi-party negotiations, where the number of participants is statically unknown, new participants can dynamically join ongoing negotiations, and each participant knows only those parties with whom it has interacted. The application is based on asynchronous communication and exploits the D2PC protocol for committing or aborting the negotiation. The software architecture consists of three modular layers, with the D2PC compiled in the bottom layer and completely transparent to final users. Our prototype is tailored to the running scenario of ad-hoc networks used by rescue teams for disaster recovery. |

Frangioni, Antonio
It is well-known that the Lagrangian dual of an Integer Linear Program (ILP) provides the same bound as a continuous relaxation involving the convex hull of all the optimal solutions of the Lagrangian relaxation. It is less often realized that this equivalence is \emph{effective}, in that basically all known algorithms for solving the Lagrangian dual either naturally compute an (approximate) optimal solution of the "convexified relaxation", or can be modified to do so. After recalling these results we elaborate on the importance of the availability of primal information produced by the Lagrangian dual within both exact and approximate approaches to the original (ILP), using three optimization problems with different structure to illustrate some of the main points. |
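
The equivalence being recalled can be stated compactly. In a standard textbook formulation (Geoffrion's; the notation below is assumed for illustration and is not necessarily the paper's), relaxing the "complicating" constraints $Ax \ge b$ of an ILP with "easy" discrete feasible set $X$ gives:

```latex
% ILP, its Lagrangian dual, and the equivalent "convexified relaxation"
z_{ILP} \;=\; \min \{\, c^{\top} x \,:\, Ax \ge b,\ x \in X \,\}
\qquad
z_{LD} \;=\; \max_{\lambda \ge 0} \, \min_{x \in X} \; c^{\top} x + \lambda^{\top} (b - Ax)
\qquad
z_{LD} \;=\; \min \{\, c^{\top} x \,:\, Ax \ge b,\ x \in \operatorname{conv}(X) \,\}
```

The third problem is the "convexified relaxation" mentioned above: its optimal value equals that of the Lagrangian dual, and its (approximate) optimal solutions are the primal information that dual algorithms can be made to produce.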

Luccio, Fabrizio and Mesa Enriquez, Antonio and Olivares Rieumont, P. and Pagli, Linda
A subtree P of a given tree T is a bottom-up subtree of T if, for each internal vertex u of P, all the children of u in T are also vertices of P. Our problem is deciding if a pattern tree P of m vertices is a bottom-up subtree of a text tree T of n vertices, m <= n, and, in the affirmative case, finding all the occurrences of P in T. If the vertices are labeled, the labels in the corresponding positions in P and T must match. We consider unordered rooted trees with labeled vertices, which are reduced to simpler ordered trees through an ordering algorithm. Processing time depends on the label format. If the labels are single characters of a constant or of an n-integer alphabet \Sigma (respectively, |\Sigma| = constant, or \Sigma is composed of integers in the range [1 \div n]), the problem is solved in O(m + \log n) time and \Theta(m) additional space, after a preprocessing of T done in \Theta(n) time and \Theta(n) additional space. The number of occurrences of P in T does not appear in the search time because all such occurrences can be directly reconstructed from a constant amount of output information. For more complex labels, the running times slightly increase. In particular, if the labels are sequences of characters (e.g., as in XML files) the running time becomes a function of the total length of all the labels in T and P. Although directed to the "simple problem" of exact subtree matching, our work is the first one to solve it in linear time for unordered trees. Moreover, our approach is that of dictionary search. Regarding $T$ as a static text on which several queries are made, $P$ as the contents of one such query, and assuming $m=o(n)$, the response time for each $P$ is sublinear in the size of the overall structure. |
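
The bottom-up subtree condition can be checked directly from the definition; the sketch below is a naive O(nm)-style illustration on unordered labelled trees (nodes as `(label, children)` pairs, an assumed representation), not the paper's O(m + log n) dictionary-search algorithm.

```python
# Illustrative, naive check of the bottom-up subtree property for unordered
# labelled trees; a node is a pair (label, [children]). This is NOT the
# paper's algorithm, just a direct reading of the definition.

def matches(t, p):
    """Does pattern p occur as a bottom-up subtree rooted exactly at node t?"""
    t_label, t_kids = t
    p_label, p_kids = p
    if t_label != p_label:
        return False
    if not p_kids:            # leaf of P: no constraint on t's children
        return True
    if len(t_kids) != len(p_kids):
        return False          # internal vertex of P must cover ALL T-children
    def backtrack(i, used):   # unordered matching of children (backtracking)
        if i == len(p_kids):
            return True
        return any(j not in used and matches(t_kids[j], p_kids[i])
                   and backtrack(i + 1, used | {j})
                   for j in range(len(t_kids)))
    return backtrack(0, frozenset())

def occurrences(t, p):
    """All nodes of T at which P occurs as a bottom-up subtree."""
    found, stack = [], [t]
    while stack:
        node = stack.pop()
        if matches(node, p):
            found.append(node)
        stack.extend(node[1])
    return found
```

Note how an internal vertex of P must account for every T-child of the matched node, while a leaf of P may sit above an arbitrary subtree of T; the children are matched unordered, which is what the paper's ordering algorithm avoids paying for.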

Tulone, D.
We propose a distributed digital time-stamping service (TSS) based on absolute timestamps and resilient to Byzantine failures. In contrast with previous solutions based on linking schemes, our TSS provides \emph{high scalability}, \emph{better response time}, and \emph{fine granularity}. It works in asynchronous systems, |

Bodei, Chiara and Degano, Pierpaolo and Gao, Han and Brodo, Linda
A type flaw attack on a security protocol is an attack where an honest principal is cheated into interpreting a field in a message as having a type other than the intended one. In this paper, we present an extension of the LySa calculus with tags attached to each field, indicating the intended types. We develop a control flow analysis for analysing the extended LySa, which over-approximates all the possible behaviour of a protocol and hence is able to capture any type confusion that may happen during the protocol execution. The control flow analysis has been applied to a number of security protocols, either subject to type flaw attacks or not. The results show that it is able to capture type flaw attacks on those security protocols. |

Ghelli, Giorgio and Colazzo, Dario and Sartiani, Carlo
Inclusion between XML types is important but expensive, and is much more expensive when unordered types are considered. We prove here that inclusion for XML types with interleaving and counting can be decided in polynomial time in the presence of two important restrictions: no element appears twice in the same content model, and Kleene star is only applied to disjunctions of single elements. Our approach is based on the transformation of each such type into a set of constraints that completely characterizes the type. We then provide a complete deduction system to verify whether the constraints of one type imply all the constraints of another one. |

Nicotra, Luca and Micheli, Alessio and Starita, Antonina
In this report we perform a comparative study of kernel functions defined on generative models, with the goal of embedding phylogenetic information into a discriminative learning approach. We describe three generative tree kernels: a sufficient statistics kernel, a Fisher kernel, and a probability product kernel; their key features are the adaptivity to the input domain and the ability to deal with structured data. In particular, kernel adaptivity is obtained through the estimation of the parameters of a tree-structured model of evolution from an input domain of phylogenetic profiles encoding the presence or absence of specific proteins in a set of fully sequenced genomes. We report results obtained in the prediction of the functional class of the proteins of the yeast S. cerevisiae, together with comparisons with a standard vector-based kernel and with a non-adaptive tree kernel function. To further analyze the impact of the discriminative learning phase, and to provide an assessment of the information retained by the learned generative models, we apply them directly to classification through log-odds. Finally, the advantage achieved through adaptivity for two of the new kernels is assessed through a comparison with similar kernels based on randomly initialized generative models where no learning is performed, and with kernels whose parameters are set only on the basis of biological considerations. |

Pedreschi, Dino and Ruggieri, Salvatore and Turini, Franco
In the context of civil rights law, discrimination refers to unfair or unequal treatment of people based on membership in a category or a minority, without regard to individual merit. Rules extracted from databases by data mining techniques, such as classification or association rules, when used for decision tasks such as benefit or credit approval, can be discriminatory in the above sense. This deficiency of classification and association rules poses ethical and legal issues, as well as obstacles to practical application. In this paper, the notion of discriminatory classification rules is introduced and studied. Examples of potentially discriminatory attributes include gender, race, job, and age. A measure, termed $\alpha$-protection, of the discrimination power of a classification rule containing a discriminatory item is defined and its properties studied. We show that the introduced notion is non-trivial, in the sense that discriminatory rules can be derived from apparently safe ones under natural assumptions about background knowledge. Finally, we discuss how to check $\alpha$-protection and provide an empirical assessment on the German credit dataset. |
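
The abstract does not spell out the definition of $\alpha$-protection, so the sketch below uses a confidence-ratio ("extended lift") reading of such a measure purely as an illustrative proxy; all names and thresholds here are assumptions, not the paper's.

```python
# Illustrative sketch only: an "extended lift" style measure of how much a
# potentially discriminatory itemset A inflates the confidence of a rule
# B -> C. Not claimed to be the paper's exact definition of alpha-protection.

def confidence(supp_body_and_head, supp_body):
    """conf(body -> head) = supp(body AND head) / supp(body)."""
    return supp_body_and_head / supp_body

def extended_lift(conf_with_discr_item, conf_without_discr_item):
    """Ratio conf(A,B -> C) / conf(B -> C): > 1 means the discriminatory
    item A makes the decision rule stricter for the protected group."""
    return conf_with_discr_item / conf_without_discr_item

def is_alpha_protective(conf_with, conf_without, alpha):
    """A rule passes the alpha threshold if the inflation stays below alpha."""
    return extended_lift(conf_with, conf_without) < alpha

# Example: conf(A,B -> C) = 0.75 while conf(B -> C) = 0.25: the
# discriminatory item triples the confidence.
print(is_alpha_protective(0.75, 0.25, 2.0))  # False: not 2-protective
print(is_alpha_protective(0.75, 0.25, 4.0))  # True: 4-protective
```

The point of such a ratio is exactly the one the abstract makes: an apparently safe rule B -> C can hide a discriminatory rule A,B -> C once background knowledge supplies the link between B and A.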

Bruni, Roberto and Lluch Lafuente, Alberto and Montanari, Ugo and Tuosto, Emilio
We introduce Architectural Design Rewriting (ADR), an approach to deal with the design of reconfigurable software architectures. The key features we promote are: (i) rule-based approach (over graphs); (ii) hierarchical design; (iii) algebraic presentation; and (iv) inductively-defined reconfigurations. Architectures are suitably modeled by graphs whose edges and nodes respectively represent components and connection ports. Architectures are designed hierarchically by a set of edge replacement rules that fix the architectural style. Depending on their reading, productions allow: (i) top-down design by refinement, (ii) bottom-up typing of actual architectures, and (iii) well-formed composition of architectures. The key idea is to encode style proofs as terms and to exploit such information at run-time for guiding reconfigurations. The main advantages of ADR are that: (i) instead of reasoning on flat architectures, ADR specifications provide a convenient hierarchical structure, by exploiting the architectural classes introduced by the style, (ii) complex reconfiguration schemes can be defined inductively, and (iii) style-preservation is guaranteed. |

Aldinucci, Marco and Danelutto, Marco and Kilpatrick, Peter
Following earlier work demonstrating the utility of Orc as a means of specifying and reasoning about grid applications we propose the enhancement of such specifications with metadata that provide a means to extend an Orc specification with implementation oriented information. We argue that such specifications provide a useful refinement step in allowing reasoning about implementation related issues ahead of actual implementation or even prototyping. As examples, we demonstrate how such extended specifications can be used for investigating security related issues and for evaluating the cost of handling grid resource faults. The approach emphasises a semi-formal style of reasoning that makes maximum use of programmer domain knowledge and experience. |

Ferragina, Paolo and Nitto, Igor and Venturini, Rossano
Let G be an unweighted and undirected graph of n nodes, and let D be the n x n matrix storing the All-Pairs-Shortest-Path distances in G. Since D contains integers in [n], its plain storage takes n^2 log(n + 1) bits. However, a simple counting argument shows that n^2/2 bits are necessary to store D. In this paper we investigate the question of finding a succinct representation of D that requires O(n^2) bits of storage and still supports constant-time access to each of its entries. This is asymptotically optimal in the worst case, and within a multiplicative factor log 3 \simeq 1.585 of the information-theoretic lower bound. As a result, O(1) bits per pair of nodes in G are enough to retain constant-time access to their shortest-path distance. We achieve this result by reducing the storage of D to the succinct storage of labeled trees and ternary sequences, for which we properly adapt and orchestrate the use of known compressed data structures. |
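
A back-of-envelope comparison shows the log-factor at stake between the plain encoding and an O(n^2)-bit one; the constant `c = 2` below is purely illustrative and is not the paper's constant.

```python
import math

# Plain matrix encoding vs. a hypothetical linear-in-n^2 bit budget.

def plain_bits(n):
    # n*n entries, each an integer in [0, n]: ceil(log2(n+1)) bits apiece
    return n * n * math.ceil(math.log2(n + 1))

def linear_bits(n, c=2):
    # O(n^2) bits with an illustrative constant c
    return c * n * n

n = 1 << 20
print(plain_bits(n) // linear_bits(n))  # the saved log-factor; here 10
```

For a million-node graph the plain encoding pays about 21 bits per entry, so even a small constant per entry is an order-of-magnitude saving.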

Aldinucci, Marco and Campa, Sonia and Danelutto, Marco and Kilpatrick, Peter and Dazzi, Patrizio and Laforenza, Domenico and Tonellotto, Nicola
We present behavioural skeletons for the CoreGrid Component Model, which are an abstraction aimed at simplifying the development of GCM-based self-managing applications. Behavioural skeletons abstract component self-management in component-based design as design patterns abstract class design in classic OO development. Since here we just want to introduce the behavioural skeleton framework, emphasis is placed on general skeleton structure rather than on their autonomic management policies. |

Aldinucci, Marco and Torquati, Massimo and Vanneschi, Marco and Alessandro, Gervaso and Zuccato, Pierfrancesco and cacitti, Manuel
VirtuaLinux is a Linux meta-distribution that allows the creation, deployment and administration of virtualized clusters with no single point of failure. The VirtuaLinux architecture supports disk-less configurations and provides an efficient, iSCSI-based abstraction of the SAN. Clusters running VirtuaLinux exhibit no master node, to boost resilience and flexibility. |

Cappanera, Paola and Scutellà, Maria Grazia
Given a weighted directed network G, we consider the problem of computing k balanced paths from given source nodes to given destination nodes of G, i.e., k paths such that the difference in cost between the longest and the shortest path is minimized. Although not yet investigated by the OR scientific community, except for some preliminary theoretical results concerning the special case of acyclic networks, balanced path problems arise in several interesting applications, both in transportation and in telecommunication settings. For example, in load balancing of multiple Quality of Service (QoS) paths in the IP Telephony service, the problem of minimizing the end-to-end delay difference between the longest and the shortest path in multiple path management schemes has been suggested in the literature as a promising line of research. In this work the focus is on the computation of node-disjoint balanced paths in the general case, where the input graph G may have any structure. Starting from some algorithmic ideas proposed for acyclic networks, a general framework is first described, which is based on the color-coding method for computing simple paths. Then the general framework is specialized and a pool of algorithms is designed, which includes both an exact approach and alternative heuristics. The algorithms have been tested on a large suite of instances generated from some benchmark telecommunication instances. The computational results are very interesting: in some cases, the exact color-coding algorithm produces the optimal solution very rapidly; in the remaining cases, some of the proposed heuristics are able to generate high-quality solutions very quickly. |
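
The color-coding primitive the framework builds on can be sketched as follows; this is the textbook randomized version for finding one simple path on k vertices, not the paper's balanced-path machinery (the adjacency-dict encoding and trial count are illustrative assumptions).

```python
import random

def colorful_path(adj, k, trials=50, seed=0):
    """Color-coding: repeatedly color the vertices with k colors at random;
    a path whose vertices receive pairwise distinct colors ("colorful") is
    necessarily simple, and colorful paths can be found by dynamic
    programming over (vertex, color-set) pairs."""
    rng = random.Random(seed)
    for _ in range(trials):
        color = {v: rng.randrange(k) for v in adj}
        # layer[v] maps a frozenset of used colors to one colorful path ending at v
        layer = {v: {frozenset([color[v]]): (v,)} for v in adj}
        for _ in range(k - 1):
            nxt = {v: {} for v in adj}
            for u, table in layer.items():
                for colors, path in table.items():
                    for v in adj[u]:
                        if color[v] not in colors:   # stay colorful
                            nxt[v].setdefault(colors | {color[v]}, path + (v,))
            layer = nxt
        for table in layer.values():
            for colors, path in table.items():
                if len(colors) == k:
                    return list(path)   # simple path on k vertices
    return None  # every trial produced an unlucky coloring
```

Each trial succeeds with probability at least k!/k^k for any fixed simple k-path, so a modest number of trials finds one with high probability; the paper's algorithms layer the balancing objective on top of this machinery.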

Baldan, Paolo and Bracciali, Andrea and Bruni, Roberto
We propose a general methodology for analysing the behaviour of open systems modelled as "coordinators", i.e., open terms of suitable process calculi. A coordinator is understood as a process with holes or place-holders where other coordinators and components (i.e., closed terms) can be plugged in, thus influencing its behaviour. The operational semantics of coordinators is given by means of a symbolic transition system, where states are coordinators and transitions are labelled by spatial/modal formulae expressing the potential interaction that plugged components may enable. Behavioural equivalences for coordinators, like strong and weak bisimilarities, can be straightforwardly defined over such a transition system. Differently from other approaches based on universal closures, i.e., where two coordinators are considered equivalent when all their closed instances are equivalent, our semantics preserves the openness of the system during its evolution, thus allowing dynamic instantiation to be accounted for in the semantics. To further support the adequacy of the construction, we show that our symbolic equivalences provide correct approximations of their universally closed counterparts, coinciding with them over closed components. For process calculi in suitable formats, we show how tractable symbolic semantics can be defined constructively using unification. |

Bigi, Giancarlo and Frangioni, Antonio and Zhang, Qinghua
In this paper we propose a novel generalization of the canonical DC problem and we study the convergence of outer approximation (cutting plane) algorithms for its solution which use an “approximated” oracle for checking the global optimality conditions of the problem. Although the approximated optimality conditions are similar to those of the canonical DC problem, the new class of Canonical Reverse Polar (CRP) problems is shown to significantly differ from its special case. We also show that outer approximation approaches for DC problems need to be substantially modified in order to cope with (CRP); interestingly, some outer approximation approaches for the latter cannot be applied to the former, thus the more general problem allows for novel algorithms. We develop a hierarchy of conditions that guarantee the convergence of cutting plane algorithms; relying on these conditions, we build four cutting plane algorithms for solving (CRP), which seem to be new and cannot be reduced to each other. |

Bacciu, Davide and Micheli, Alessio and Starita, Antonina
The paper extends Competitive Repetition-suppression (CoRe) learning to deal with high dimensional data clustering. We show how CoRe can be applied to the automatic detection of the unknown cluster number and the simultaneous ranking of the features according to learned relevance factors. The effectiveness of the approach is tested on two freely available data sets from gene expression data and the results show that CoRe clustering is able to discover the true data partitioning in a completely unsupervised way, while it develops a feature ranking that is consistent with the state-of-the-art lists of gene relevance. |

Aldinucci, Marco and Danelutto, Marco and Kilpatrick, Peter
Formal tools can be used in a "semi-formal" way to support distributed program analysis and tuning. We show how Orc has been used to reverse-engineer a skeleton-based programming environment and to remove one of the skeleton system's recognized weak points. The semi-formal approach adopted allowed these steps to be performed in a programmer-friendly way. |

Bigi, Giancarlo and Frangioni, Antonio and Zhang, Qinghua
The paper discusses a general framework for outer approximation type algorithms for the canonical DC optimization problem. A thorough analysis of properties which guarantee convergence is carried out: different sets of general conditions are proposed and compared. They are exploited to build six different algorithms, which include the first cutting plane algorithm proposed by Tuy but also new ones. Approximate optimality conditions are introduced to guarantee the termination of the algorithms and the relationships with the global optimal value are discussed. |

Ciriani, Valentina and Cordone, Roberto
Multi-level logic synthesis yields much more compact expressions of a given Boolean function with respect to standard two-level sum of products (SOP) forms. On the other hand, minimizing an expression with more than two levels can take a long time. In this paper we introduce a novel algebraic four-level expression, named k-EXOR-projected sum of products (kEP-SOP) form, whose synthesis can be performed in polynomial time with an approximation algorithm starting from a minimal SOP. Our experiments show that the resulting networks can be obtained in very short computational time and often exhibit high quality. We also study the testability of these networks under the stuck-at fault model, and show how fully testable circuits can be generated from them by adding at most a constant number of multiplexer gates. Experimental results show the effectiveness of this method both for four-level logic and for general multi-level logic synthesis. |

Bartoletti, Massimo and Degano, Pierpaolo and Ferrari, Gianluigi
A static approach is proposed to study secure composition of services. We extend the lambda-calculus with primitives for selecting and invoking services that respect given security requirements. Security-critical code is enclosed in policy framings with a possibly nested, local scope. Policy framings enforce safety and liveness properties. The actual run-time behaviour of services is over-approximated by a type and effect system. Types are standard, and effects include the actions with possible security concerns - as well as information about which services may be invoked at run-time. An approximation is model-checked to verify policy framings within their scopes. This allows for removing any run-time execution monitor, and for determining the plans driving the selection of those services that match the security requirements on demand. |

Colazzo, Dario and Sartiani, Carlo
While XML is an ordered data format, many applications outside the document processing area just drop ordering and manipulate XML data as if they were unordered. In these contexts, hence, XML is essentially used as a way of representing unordered, unranked trees. The wide use of unordered XML data should be coupled with a careful and detailed analysis of their theoretical properties. One of the operations that is most affected by the presence of a global ordering relation is semantic subtype-checking, i.e., language inclusion. In an unordered context, inclusion has been proved to be inherently more complex than in the ordered case: in particular, subtype-checking for ordered single-type EDTDs is in PSPACE, while the same operation for single-type EDTDs with unordered types is in EXPSPACE (the same complexity result holds for unordered DTDs). Comparing two unordered XML types for inclusion, hence, is very expensive; as a consequence, it becomes very important to identify restrictions defining type classes for which inclusion is tractable or, at least, less complex. This paper identifies two large subclasses of unordered XML types for which inclusion can be computed by an EXPTIME and a PTIME algorithm, respectively. These classes are defined by restrictions on the use of element, repetition, and union types, and comprise many DTDs and XML Schemas used in practice. |

Albano, Michele and Chessa, Stefano and Nidito, Francesco and Pelagatti, Susanna
Storage of sensed data in wireless sensor networks is essential when the sink node is unavailable due to failure and/or disconnections, but it can also provide efficient access to sensed data to multiple sink nodes. Recent approaches to data storage rely on Geographic Hash Tables for efficient data storage and retrieval. These approaches, however, do not support different QoS levels for different classes of data, as the programmer has no control on the level of redundancy of data (and thus on data dependability). Moreover, they result in a great unbalance in the storage usage of each sensor, even when sensors are uniformly distributed. This may cause serious data losses, waste energy and shorten the overall lifetime of the sensor network. In this paper, we propose a novel protocol, Q-NiGHT, which (1) provides direct control on the level of QoS in data dependability, and (2) uses a strategy similar to the rejection method to build a hash function which scatters data approximately with the same distribution as the sensors. The benefits of Q-NiGHT are assessed through a detailed simulation experiment, also discussed in the paper. Results show its good performance on different sensor distributions in terms of both protocol costs and load balance between sensors. |
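
The rejection-method idea mentioned above can be sketched in a few lines: derive the randomness deterministically from the data key, then reject candidate positions until one falls under the sensor-density curve. The interface (unit square, density normalised to at most 1) is an assumption for illustration, not Q-NiGHT's actual API.

```python
import hashlib
import random

def density_hash(key, density, max_iter=1000):
    """Map `key` to a point in the unit square distributed like `density`
    (with density(x, y) <= 1 everywhere), so that data lands roughly where
    sensors actually are instead of uniformly."""
    rng = random.Random(hashlib.sha256(key.encode()).digest())
    x = y = 0.0
    for _ in range(max_iter):
        x, y = rng.random(), rng.random()      # uniform candidate position
        if rng.random() <= density(x, y):      # accept w.p. density(x, y)
            return (x, y)
    return (x, y)  # pathological density: fall back to the last candidate
```

Because the randomness is seeded from the key alone, every node recomputes the same storage position for the same datum; regions where `density` is higher accept proportionally more candidates, which is what rebalances the storage load.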

Pisanti, Nadia and Soldano, Henry and Carpentier, Mathilde and Pothier, Joel
In this paper we define a new class of problems that generalizes that of finding repeated motifs. The novelty lies in the addition of constraints on the motifs in terms of relations that must hold between pairs of positions of the motifs. We will hence denote them as "relational motifs". For this class of problems we give an algorithm that is a suitable extension of the KMR paradigm and, in particular, of KMRC, as it uses a degenerate alphabet. The algorithm contains several improvements with respect to KMRC that turn out to be especially useful when---as is required for relational motifs---the inference is made by partially overlapping shorter motifs, rather than concatenating them like in KMR. The efficiency, correctness and completeness of the algorithm are ensured by several non-trivial properties that we prove in this paper. Finally, we list some possible applications and we focus on one of them: the study of 3D structures of proteins. |

Bonchi, Filippo and Montanari, Ugo
In this report we study the connection between two well-known models for interactive systems. Reactive Systems à la Leifer and Milner allow one to derive an interactive semantics from a reduction semantics, guaranteeing, under rather restrictive conditions, the compositionality of the abstract semantics (bisimilarity). Universal Coalgebra provides a categorical framework where bisimilarity can be characterized as final semantics, i.e., as the unique morphism to the final coalgebra. Moreover, if lifting a coalgebra to a structured setting is possible, then bisimilarity is compositional with respect to the lifted structure. Here we show that for every reactive system we can build a coalgebra. Further, if bisimilarity is compositional in the reactive system, then we can lift this coalgebra to a structured coalgebra. |

Brogi, Antonio and Popescu, Razvan
The availability of several languages for the description of Web service behaviour (viz., protocols) hinders (automated) Web service aggregation, discovery, and adaptation, as currently there are no tools for the (automated) translation of service protocols. In this paper we motivate the choice of YAWL as a lingua franca to express the behaviour of Web services. Furthermore, we describe a methodology for translating BPEL processes into YAWL workflows, thus paving the way for the formal analysis, aggregation, discovery, and adaptation of BPEL processes. Roughly, our methodology defines a YAWL pattern for each BPEL activity. The translation of a BPEL process then reduces to suitably instantiating and interconnecting the patterns of its activities. |

Brogi, Antonio and Popescu, Razvan
We present a methodology for the automated selection and aggregation of (Web) services with the purpose of satisfying client queries. A key ingredient of our approach is the notion of service contract, which consists of signature (WSDL), ontology information (OWL), and behaviour specification (YAWL). The methodology inputs a registry of service contracts and a client service contract, and it automatically generates aggregated contracts that fulfil the request. By trace inspection we first identify candidate sets of contracts that could satisfy the query collectively. For each candidate set, we generate the contract of the aggregate by suitably building its control- and data-flow, and we verify whether it actually complies with the request. |

Aldinucci, Marco and Bertolli, Carlo and Campa, Sonia and Coppola, Massimo and Vanneschi, Marco and Veraldi, Luca and Zoccolo, Corrado
We present the concept of adaptive super-component as a hierarchy of components with which a well-known parallel behavior is associated. The proposal of a super-component feature is part of the experience we gained in the implementation of the ASSIST environment, which allows the development of self-configuring and self-optimising, component-based applications following a structured and hierarchical approach. We discuss how such an approach to Grid programming influenced the design of the GCM component model. |

Bellia, Marco and Occhiuto, Maria Eugena
In this paper we investigate the possibility of adding higher-order functionalities to Java, that is, passing methods as parameters to other methods. The approach is based on a mechanism which offers a restricted, disciplined form of function abstraction, but one suitable for integrating higher-order and object-oriented programming. We discuss how the expressive power of the language is increased. A new syntax is introduced for formal and actual parameters; hence the paper shows an implementation of the extensions based on a preprocessing technique that maps programs of the extended language into programs of ordinary Java. |

Ferragina, Paolo and Venturini, Rossano
In this note we propose a storage scheme for a string S[1,n], drawn from an alphabet A, that requires space close to the k-th order empirical entropy of S and allows retrieving any L-long substring of S in optimal O(1 + L log |A| / log n) time. This matches the best known bounds via the sole use of binary encodings and tables. |

Bernasconi, Anna and Ciriani, Valentina and Cordone, Roberto
In this paper we introduce a new algebraic form for Boolean function representation, called \emph{EXOR-Projected Sum of Products} (\emph{EP-SOP}), resulting in a four level network that can be easily implemented in practice. We prove that deriving an optimal EP-SOP from an optimal SOP form is a hard problem ($NP^{NP}$-hard); nevertheless we propose a very efficient approximation algorithm, which returns in polynomial time an EP-SOP form whose cost is guaranteed to be near the optimum. Experimental evidence shows that for about $35\%$ of the classical synthesis benchmarks the EP-SOP networks have a smaller area and delay with respect to the optimal SOPs (sometimes gaining even $40$-$50\%$ of the area). Since the computational times required are extremely short, we recommend the use of the proposed approach as a post-processing step after SOP minimization. |

Peterlongo, Pierre and Pisanti, Nadia and Pereira do Lago, Alair and Sagot, Marie-France
Identifying local similarity between two or more sequences, or identifying repetitions occurring at least twice in a sequence, is an essential part of the analysis of biological sequences and of their phylogenetic relationship. Finding fragments that are conserved among several given sequences, or inside a unique sequence, while allowing for a certain number of insertions, deletions, and substitutions, is however known to be a computationally expensive task, and consequently exact methods usually cannot be applied in practice. The filter we introduce in this paper, called Ed'Nimbus, provides a possible solution to this problem. It can be used as a preprocessing step to any multiple alignment method, eliminating an important fraction of the input that is guaranteed not to contain any approximate repetition. It consists in the verification of a strong necessary condition. This condition concerns the number and order of exactly repeated words shared by the approximate repetitions. The efficiency of the filter is due to this condition, which we show how to check in a fast way. The speed of the method is achieved thanks also to the use of a simple and efficient data structure, which we describe in this paper, as well as its linear time and space construction. Our results show that using Ed'Nimbus allows us to substantially reduce the time spent in a local multiple alignment. The Ed'Nimbus software is freely available on the web: http://igm.univ-mlv.fr/$\sim$peterlon/officiel/ednimbus/ |
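
The flavour of such a necessary condition can be conveyed with a toy seed filter: two fragments can only be approximate repetitions of each other if they share enough exact words. This is a deliberate simplification of Ed'Nimbus's actual condition (which also constrains the ORDER of the shared words); the word length `k` and the threshold are illustrative assumptions.

```python
# Toy seed-based filter: count exact k-mers of t that also occur in s.
# A pair failing the threshold is guaranteed (for suitable k/threshold)
# not to be an approximate repetition and can be discarded cheaply.

def shared_words(s, t, k):
    """Number of positions of t whose k-mer also occurs somewhere in s."""
    seeds = {s[i:i + k] for i in range(len(s) - k + 1)}
    return sum(1 for j in range(len(t) - k + 1) if t[j:j + k] in seeds)

def may_be_repetition(s, t, k=4, threshold=3):
    """Necessary-condition filter: False means 'guaranteed not a repetition',
    True means 'survives the filter and must be checked exactly'."""
    return shared_words(s, t, k) >= threshold
```

The saving is exactly the one the abstract describes: the cheap count runs over the whole input, and the expensive alignment only over the pairs that survive.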

Bellia, Marco and Occhiuto, Maria Eugena
Higher-order programming is considered a good methodology for program design and specification; furthermore, it is fundamental for rapid prototyping. The paper is devoted to higher-order programming in Java and, more generally, in the OO programming paradigm. It extends \cite{high2} and discusses introspection as a means to support higher-order programming, comparing this technique with other different, interesting approaches, including function emulation and function integration. Finally, it addresses the problem of embedding, in the OO paradigm, suitable mechanisms for method passing and method extraction: it exemplifies their use in OO programming and discusses language definition techniques. |

Bonchi, Filippo and Gadducci, Fabio and Koenig, Barbara
The paper presents a case study on the synthesis of labelled transition systems (LTSs) for process calculi, choosing as testbed Milner's Calculus of Communicating Systems (CCS). The proposal is based on a graphical encoding: each CCS process is mapped into a graph equipped with suitable interfaces, such that the denotation is fully abstract with respect to the usual structural congruence. Graphs with interfaces are amenable to the synthesis mechanism based on borrowed contexts (BCs), proposed by Ehrig and Koenig (which are an instance of relative pushouts, originally introduced by Leifer and Milner). The BC mechanism allows the effective construction of an LTS that has graphs with interfaces as both states and labels, and such that the associated bisimilarity is automatically a congruence. Our paper focuses on the analysis of the LTS distilled by exploiting the encoding of CCS processes: besides offering some technical contributions towards the simplification of the BC mechanism, the key result of our work is the proof that the bisimilarity on processes obtained via BCs coincides with the standard strong bisimilarity for CCS. |

Brogi, Antonio and Corfini, Sara
A major challenge for Service-oriented Computing is how to discover and compose (Web) services to build complex applications. We present a matchmaking system that exploits both semantics and behavioural information to discover service compositions capable of satisfying a client request. |

Aldinucci, Marco and Danelutto, Marco
Skeletal systems exploit algorithmic skeleton technology to provide the user with very high-level, efficient parallel programming environments. They have recently been demonstrated to be suitable for highly distributed architectures, such as workstation clusters, networks and grids. However, when using skeletal systems for grid programming, care must be taken to secure data and code transfers across non-dedicated, non-secure network links. In this work we take into account the cost of introducing security in muskel, a full Java skeletal system exploiting macro data flow implementation technology. We consider the adoption of mechanisms that allow securing all the communications happening between remote, unreliable nodes, and we evaluate the cost of such mechanisms. In particular, we consider the implications on the computational grains needed to scale secure and insecure skeletal computations. |

Vanneschi, Marco
This report reviews the state of the art and analyses the most interesting scientific trends in the field of Grid platforms. These are studied in their full generality, emphasising their most characterising aspects in terms of dynamicity, adaptivity, heterogeneity, and the ability to support scalable cooperation paradigms. As clearly emerged from the most recent research projects in which the Department's group has participated (Grid.it, CoreGrid), the definition and implementation of a set of high-level tools for the design of applications and services is fundamental: such tools should make the platform essentially invisible to the application designer (“Invisible Grid”). Their goals are the control of Quality of Service (performance, reliability, security), to be expressed at a high level and enforced dynamically, and the ability to support multiple versions of the application software, portable across the evolution of the underlying technology and characterised by different performance levels. The convergences between research areas concerning parallel processing, knowledge discovery, security and intelligent networking are discussed in depth. |

Bernasconi, Anna and Ciriani, Valentina
In this paper we characterize and study a new class of regular Boolean functions called D-reducible. A D-reducible function, depending on all its $n$ input variables, can be studied and synthesized in a space of dimension strictly smaller than $n$. A D-reducible function can be efficiently decomposed, giving rise to a new logic form, which we have called DRedSOP. This form is shown here to be generally smaller than the corresponding minimum SOP form. Experimental evidence shows that such functions are rather common and that D-reducibility can be tested very quickly. |

Bernasconi, Anna and Ciriani, Valentina and Drechsler, Rolf and Villa, Tiziano
The paper presents a heuristic algorithm for the minimization of 2-SPP networks, i.e., three-level XOR-AND-OR forms with XOR gates restricted to fan-in 2. Previous works had presented exact algorithms for the minimization of unrestricted SPP networks and of 2-SPP networks. The exact minimization procedures were formulated as covering problems, as in the minimization of SOP forms, and had worst-case exponential complexity. Extending the expand-irredundant-reduce paradigm of the ESPRESSO heuristic, we propose a minimization algorithm for 2-SPP networks that iterates local minimization and reshaping of a solution until no further improvement is achieved. We also introduce the notion of EXOR-irredundancy to prove that OR-AND-EXOR irredundant networks are fully testable, and we guarantee that our algorithm yields OR-AND-EXOR irredundant solutions. We report a large set of experiments showing high-quality results with affordable run times, handling also examples whose exact solutions could not be computed. |

Aldinucci, Marco and Danelutto, Marco and Paternesi, Andrea and Ravazzolo, Roberto and Vanneschi, Marco
The ASSIST environment provides a high-level programming toolkit for the grid. ASSIST applications are described by means of a coordination language, which can express arbitrary graphs of modules. These modules (or a graph of them) may be enclosed in components specifically designed for the grid (GRID.it components). In this paper we describe how ASSIST modules can be wired through standard Web Services, and how GRID.it components may be made available as standard Web Services. |

Frangioni, Antonio and Gentile, Claudio
Perspective cuts are a computationally effective family of valid inequalities, belonging to the general family of disjunctive cuts, for Mixed-Integer Convex NonLinear Programming problems with a specific structure. The required structure can be forced upon models that would not originally display it by decomposing the Hessian of the problem into the sum of two positive semidefinite matrices, a generic one and a diagonal one, so that the latter is ''as large as possible''. We compare two ways of computing the diagonal matrix: an inexpensive approach requiring a minimum eigenvalue computation, and a more costly procedure which requires the solution of a SemiDefinite Programming problem. The latter dramatically outperforms the former, at least on instances of the Mean-Variance problem in portfolio optimization. |
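The cheaper of the two decompositions above can be illustrated with a few lines of numpy (an illustrative sketch under our own naming, not the authors' code): shifting the Hessian by its minimum eigenvalue yields a diagonal PSD part D while keeping the remainder Q - D positive semidefinite.

```python
import numpy as np

def diagonal_split_min_eig(Q):
    """Split a PSD Hessian Q as (Q - D) + D with D diagonal and PSD,
    using the inexpensive minimum-eigenvalue rule: D = lambda_min(Q) * I.
    (Hypothetical helper name; the SDP-based alternative would instead
    maximize the trace of D subject to Q - D being PSD.)"""
    lam_min = np.linalg.eigvalsh(Q)[0]   # smallest eigenvalue of Q
    lam_min = max(lam_min, 0.0)          # keep D PSD even if Q is slightly indefinite
    D = lam_min * np.eye(Q.shape[0])
    return Q - D, D

# Toy 3x3 covariance-like Hessian, as in a small Mean-Variance instance
Q = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])
R, D = diagonal_split_min_eig(Q)
```

Note that this rule extracts the same multiple of the identity on every diagonal entry, which is exactly why the SDP-based decomposition, free to choose each diagonal entry separately, can extract a ''larger'' D.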

Ciuffoletti, Augusto
In a large distributed system there is usually the need to keep a record of the state of the components: one example is the database of the resources available in a Grid. The design of such a database should take into account that the size of the system is not constant, and may vary by orders of magnitude: manual intervention to reconfigure the system in order to meet accessibility requirements may be problematic, and a source of unreliability. We present a scheme that provides scalable accessibility to such a database, focussing on the performance of update operations. The scalable implementation of update operations enables ubiquitous replication of the database, thus bringing within reach the scalability of read operations. Each component of the system automatically regulates the relevant operational parameters through a kind of feedback, based on probabilistic considerations. Many aspects of the scheme are in fact probabilistic, and we introduce a simple model that describes the performance of the system. The overall approach, as well as certain performance figures that cannot be forecast analytically, are explored by simulation. |

Pisanti, Nadia and Soldano, Henry and Carpentier, Mathilde and Pothier, Joel
Detecting repeated 3D protein substructures has become a new crucial frontier in motifs inference. In \cite{cpm} we have suggested a possible solution to this problem by means of a new framework in which the repeated pattern is required to be conserved also in terms of relations between its position pairs. In our application these relations are the distances between $\alpha$-carbons of amino acids in 3D proteins structures, thus leading to a \emph{structural consensus} as well. In this paper we motivate some complexity issues claimed (and assumed, but not proved) in \cite{cpm} concerning inclusion tests between occurrences of repeated motifs. These inclusion tests are performed during the motifs inference in \emph{KMRoverlapR} (presented in \cite{cpm}), but also within other motifs inference tools such as \emph{KMRC} (\cite{kmrc}). These involve alternative representations of motifs, for which we also prove here some interesting properties concerning pattern matching issues. We conclude this contribution with a few tests on cytochrome P450 protein structures. |

Del Corso, Gianna M. and Gullì, Antonio and Romani, Francesco
The PageRank algorithm plays a very important role in search engine technology and consists in the computation of the eigenvector corresponding to the eigenvalue one of a matrix whose size is now in the billions. The problem incorporates a parameter $\alpha$ that determines the difficulty of the problem. In this paper, the effectiveness of stationary and non-stationary methods is compared on some portions of real web matrices for different choices of $\alpha$. We see that stationary methods are very reliable and more competitive when the problem is well conditioned, that is, for small values of $\alpha$. However, for large values of the parameter $\alpha$ the problem becomes more difficult, and methods such as preconditioned BiCGStab or restarted preconditioned GMRES become competitive with stationary methods in terms of Mflops count as well as in the number of iterations necessary to reach convergence. |
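For reference, the simplest stationary method against which the Krylov solvers are compared is plain power iteration on the Google matrix; the following toy sketch (ours, not the paper's code) shows how the parameter $\alpha$ enters the iteration.

```python
import numpy as np

def pagerank_power(A, alpha=0.85, tol=1e-10, max_iter=1000):
    """Power iteration for the PageRank vector of a column-stochastic
    link matrix A. The iteration applies G = alpha*A + (1-alpha)/n * E;
    convergence slows down as alpha approaches 1, which is why the
    problem becomes harder for large alpha."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        x_new = alpha * (A @ x) + (1.0 - alpha) / n
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x

# Tiny 3-node web graph: 0 -> 1, 1 -> 2, 2 -> {0, 1}
A = np.array([[0.0, 0.0, 0.5],
              [1.0, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
pr = pagerank_power(A)
```

The contraction factor of this iteration is roughly $\alpha$, so the iteration count grows like $\log(\mathrm{tol})/\log(\alpha)$, matching the paper's observation that small $\alpha$ makes stationary methods competitive.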

Aldinucci, Marco and Danelutto, Marco and Giaccherini, Gianni and Torquati, Massimo and Vanneschi, Marco
AdHoc (Adaptive Distributed Herd of Object Caches) is a Grid-enabled, fast, scalable object repository providing programmers with a general storage module. We present three different software tools based on AdHoc: a parallel cache for Apache, a DSM, and a main-memory parallel file system. We also show that these tools exhibit considerable performance and speedup, both in absolute figures and w.r.t. other software tools exploiting the same features. |

Gorlatch, Sergei and Danelutto, Marco
The workshop is organised by the Network of Excellence CoreGRID funded by the European Commission under the sixth framework programme IST-2003-2.3.2.8 starting September 1st, 2004. CoreGRID aims at strengthening and advancing scientific and technological excellence in the area of Grid and Peer-to-Peer technologies. To achieve this objective, the network brings together a critical mass of well-established researchers (119 permanent researchers and 165 PhD students) from forty-two institutions who have constructed an ambitious joint programme of activities. The goal of the workshop is to promote the integration of the CoreGRID network and of the European research community in the area of Grid technologies, in order to overcome the current fragmentation and duplication of efforts in this area. The list of topics of Grid research covered at the workshop includes but is not limited to: knowledge & data management; programming models; system architecture; Grid information, resource and workflow monitoring services; resource management and scheduling; systems, tools and environments; trust and security issues on the Grid. |

Ciuffoletti, Augusto
This paper illustrates a technique to characterize one-way packet delays. This technique does not depend on external clock synchronization facilities: instead, it computes the relative skew of the two clocks, and uses this information to characterize one-way packet delays. We focus on the applicability of this technique to a network monitoring environment, and perform an extensive test that includes second-order clock frequency variations. We conclude that the approach is especially suited to estimating jitter and other characteristics of one-way delay variation. It can also be helpful to evaluate second-order parameters of one-way delay, like the standard deviation, when second-order variations of clock frequency, usually due to temperature variations, are compensated by hardware. This result makes one-way delay variation measurement widely accessible to Internet hosts, at a cost which is overall comparable with that of a ping. |
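The skew-estimation idea can be illustrated with a small sketch (ours, not the report's estimator, which may well use a different fitting procedure): fitting a line to (send, receive) timestamp pairs recovers the relative clock skew as the slope minus one, and the residuals serve as a proxy for one-way delay variation.

```python
import random

def estimate_skew(send_ts, recv_ts):
    """Least-squares fit of recv = a*send + b; a - 1 approximates the
    relative skew of the two clocks. The intercept absorbs the clock
    offset plus the mean one-way delay, so residuals track only delay
    variation (jitter), which is what the technique can observe without
    external synchronization."""
    n = len(send_ts)
    ms = sum(send_ts) / n
    mr = sum(recv_ts) / n
    num = sum((s - ms) * (r - mr) for s, r in zip(send_ts, recv_ts))
    den = sum((s - ms) ** 2 for s in send_ts)
    a = num / den
    b = mr - a * ms
    residuals = [r - (a * s + b) for s, r in zip(send_ts, recv_ts)]
    return a - 1.0, residuals

# Synthetic trace: 50 ppm relative skew, 10 ms base delay, small jitter
random.seed(0)
send = [float(i) for i in range(100)]
true_skew = 5e-5
recv = [s * (1 + true_skew) + 0.010 + random.uniform(0, 1e-4) for s in send]
skew, resid = estimate_skew(send, recv)
```

On this synthetic trace the recovered slope is within a fraction of a ppm of the true skew, which is consistent with the report's claim that jitter-like quantities are the easiest to estimate.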

Buscemi, Marzia and Montanari, Ugo
We propose a compositional coalgebraic semantics of the Fusion calculus of Parrow and Victor in the version with explicit fusions by Gardner and Wischik. We follow a recent approach developed by the same authors and previously applied to the pi-calculus for lifting calculi with structural axioms to bialgebraic models. In our model, the unique morphism to the final bialgebra induces a bisimilarity relation which coincides with hyperequivalence and which is a congruence with respect to the operations. Interestingly enough, the explicit fusion approach allows one to exploit for the Fusion calculus essentially the same algebraic structure used for the pi-calculus. |

Ciriani, Valentina and Bernasconi, Anna and Drechsler, Rolf
Recently introduced, three-level logic Sum of Pseudoproducts (SPP) forms allow the representation of Boolean functions with much shorter expressions than standard two-level Sum of Products (SOP) forms, or other three-level logic forms. In this paper the testability of circuits derived from SPPs is analyzed. We study testability under static Fault Models (FMs), i.e., the Stuck-At Fault Model (SAFM) and the Cellular Fault Model (CFM). For SPP networks several minimal forms can be considered. While full testability can be proved in the SAFM for some forms, SPP networks in the CFM are shown to contain redundancies. Finally, we propose a method for transforming non-testable networks into testable ones. Experimental results are given to demonstrate the efficiency of the approach. |

Ciuffoletti, Augusto
In a distributed system where scalability is an issue, as in a GRID, the problem of enforcing mutual exclusion often arises in a soft form: the infrequent failure of the mutual exclusion predicate is tolerated, without compromising the consistent operation of the overall system. For instance, this occurs when the operation subject to mutual exclusion requires massive use of a shared resource. We introduce a scalable soft mutual exclusion algorithm, based on token passing: one distinguishing feature of our algorithm is that, instead of introducing an overlay topology, we adopt a random walk approach. The consistency of our proposal is evaluated by simulation, and we exemplify its use in the coordination of large data transfers in a backbone-based network. |

Danelutto, Marco and Migliore, Castrenze and Pantaleo, Cosimino
ASSIST is a structured parallel programming environment targeting networks/clusters of workstations and grids. It introduced the parmod parallel construct, supporting a variety of parallelism exploitation patterns, including classical ones. The original implementation of parmod relies on static assignment of parallel activities to the processing elements at hand. In this work we discuss an alternative implementation of the parmod construct that implements completely dynamic assignment of parallel activities to the processing elements. We show that the new implementation introduces very limited overhead in case of regular computations, whereas it performs much better than the original one in case of irregular applications. The whole implementation of parmod is available as a C++/MPI library. |

Andreozzi, Sergio and Ciuffoletti, Augusto and Ghiselli, Antonia and Vistoli, Cristina
The availability of the outcome of network monitoring activities, while valuable for the operation of Grid applications, poses serious scalability problems: in principle, in a Grid composed of $n$ resources, we need to keep a record of $O(n^2)$ end-to-end paths. We introduce a scalable approach to network monitoring, which consists in partitioning the Grid into domains, limiting monitoring activity to the measurement of domain-to-domain connectivity. Partitions must be consistent with network performance, since we expect that the observed network performance between domains is representative of the performance between the Grid Services included in such domains. We argue that partition design is a critical step: a consequence of an inconsistent partitioning is the production of invalid characteristics. The paper discusses such an approach, also exploring its limits. We describe a fully functional prototype which is currently under test in the frame of the DATATAG project. |

Bevilacqua, Roberto and Bozzo, Enrico and Del Corso, Gianna M. and Fasino, Dario
We describe rank structures in generalized inverses of possibly rectangular banded matrices. In particular, we show that various kinds of generalized inverses of rectangular banded matrices have submatrices whose rank depends on the bandwidth and on the nullity of the matrix. Moreover, we give an explicit representation formula for some generalized inverses of strictly banded matrices. |

Bruni, Roberto and Montanari, Ugo and Lanese, Ivan
The conceptual separation between computation and coordination in distributed computing systems motivates the use of peculiar entities commonly called connectors, whose task is managing the interaction among distributed components. Different kinds of connectors exist in the literature at different levels of abstraction. We focus on a basic algebra of connectors which is expressive enough to model e.g. all the architectural connectors of CommUnity. We first define the operational, observational and denotational semantics of connectors, then we show that the observational and denotational semantics coincide and finally we give a complete normal-form axiomatization. |

Gallo, Giorgio and Sodini, Claudia
Security is one particular global challenge that has recently come to the fore due to world events and societal changes. Within this context, one of the areas in which Operations Research can give an important contribution is the area of early warning and crisis prevention. This is a challenging research area requiring a multidisciplinary approach, which has enjoyed an increased interest in recent years. Here, we will highlight some of the problems arising in this area in which O.R. techniques can find useful applications. |

Scozzari, Francesca and Amato, Gianluca
In the context of abstract interpretation based static analysis, we cope with the problem of correctness and optimality for logic program analysis. We propose a new framework equipped with a denotational, goal-dependent semantics which refines many goal-driven frameworks that have appeared in the literature. The key point is the introduction of two specialized concrete operators for forward and backward unification. We prove that our goal-dependent semantics is correct w.r.t. computed answers and we provide the best correct approximations of all the operators involved in the semantics for set-sharing analysis. We show that the precision of the overall analysis is strictly improved and that, in some cases, we gain precision w.r.t. more complex domains involving linearity and freeness information. |

Menconi, Giulia and Marangoni, Roberto
Identifying coding regions in genomic sequences represents the first step toward further analysis of the biological functions carried out by the different functional elements in a genome. This paper presents a novel method for the classification of coding and non-coding regions in prokaryotic genomes, based on a suitably defined compression index of a DNA sequence. The proposed approach has been applied to some complete prokaryotic genomes, obtaining optimal scores of correctly recognized coding and non-coding regions. Several false-positive and false-negative cases have been investigated in detail, discovering that this approach can fail in the presence of highly structured coding regions (e.g., genes coding for modular proteins) or quasi-random non-coding regions (regions hosting non-functional fragments of copies of functional genes; regions hosting promoters or other protein-binding sequences, etc.). |
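The idea of a compression index can be illustrated as follows. This is a stand-in sketch using zlib as a generic compressor: the report defines its own index, so the function below is purely illustrative of why compressibility separates structured from quasi-random sequence regions.

```python
import random
import zlib

def compression_index(seq: str) -> float:
    """Compressed-size / original-size ratio as a crude complexity proxy
    (illustrative stand-in for the report's compression index). Low values
    indicate repetitive, highly structured sequence; values near the
    entropy bound indicate quasi-random sequence."""
    raw = seq.encode("ascii")
    return len(zlib.compress(raw, 9)) / len(raw)

# A low-complexity repeat vs. a quasi-random sequence over {A,C,G,T}
random.seed(1)
repetitive = "ATATATATAT" * 50
quasi_random = "".join(random.choice("ACGT") for _ in range(500))
```

The repeat compresses to a small fraction of its length while the random sequence stays close to two bits per symbol, mirroring the failure modes noted above: highly structured coding regions and quasi-random non-coding regions both blur this separation.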

Danelutto, Marco and Vanneschi, Marco
Current GRID technology provides users/programmers with extended and comprehensive middleware tools covering all the basic aspects a programmer must deal with when writing GRID aware applications. Programmers of GRID aware applications use such middleware systems to gather the needed resources, stage code and data to the selected computing resources, schedule computations on them and so on. Overall, a huge programming effort is required in order to come to a working, efficient GRID aware application code. Here we propose a different approach. By recognizing that most GRID applications share a common parallel/distributed structure, we propose to use application managers that take care of all the details involved in the implementation of well known GRID aware application schemas. Such application managers use the functionalities of a RISC GRID core run time system (RGC). The RGC may be built on top of existing GRID middleware. Programmers of GRID aware applications, therefore, do not directly use GRID middleware. Rather, they structure the application using the available application managers. The application managers, in turn, invoke the basic GRID services provided by RGC to accomplish both functional and performance user requirements. |

Luccio, Fabrizio and Mesa Enriquez, Antonio and Pagli, Linda
The restricted rotation distance d_{R}(S,T) between two binary trees S, T of n vertices is the minimum number of rotations by which S can be transformed into T, where rotations can only take place at the root of the tree, or at the right child of the root. A sharp upper bound d_{R}(S,T) \leq 4n-8 is known, based on the word metric of Thompson's group. We refine this bound to a sharp d_{R}(S,T) \leq 4n-8-\rho_{S}-\rho_{T}, where \rho_{S} and \rho_{T} are the numbers of vertices in the rightmost vertex chains of the two trees, by means of a very simple transformation algorithm based on elementary properties of trees. We then generalize the concept of restricted rotation to k-restricted rotation, by allowing rotations to take place at all the vertices of the highest k levels of the tree. For k=2 we show that not much is gained in the worst case, although the classical problem of rebalancing an AVL tree can be solved efficiently, in particular rebalancing after vertex deletion requires O(log n) rotations as in the standard algorithm. Finding significant bounds and applications for k \geq 3 is open. |

Aldinucci, Marco and Petrocelli, Alessandro and Pistoletti, Edoardo and Torquati, Massimo and Vanneschi, Marco and Veraldi, Luca and Zoccolo, Corrado
Current Grid-aware applications are developed on top of low-level libraries by developers who are experts on Grid software implementation. Although some applications have been produced this way, this approach can hardly support the additional complexity of QoS control in real applications. We describe the ASSIST programming environment, showing that it is a suitable basis to capture and cope with many of the desired features of QoS control for the Grid. Grid applications, built as compositions of ASSIST modules, are supported by a hierarchical Application Manager targeting application QoS control. |

Flocchini, Paola and Pagli, Linda and Prencipe, Giuseppe and Santoro, Nicola and Widmayer, P. and Zuva, T.
Recently, great attention has been given to {\em point-of-failure swap rerouting}, an efficient technique for routing in the presence of transient failures. According to this technique, a message follows the normal routing table information unless the next hop has failed; in this case, it is redirected towards a precomputed link, called {\em swap}; once this link has been crossed, normal routing is resumed. The amount of precomputed information required in addition to the routing table is rather small: a single link for each destination. Several efficient serial algorithms have been presented to compute this information; none of them, unfortunately, can be efficiently implemented in a distributed environment. In this paper we present protocols, based on a new strategy, that allow the efficient computation of all the optimal swap edges under several optimization criteria. |

Pedrotti, Alberto
In the present study, we consider a computational model similar to the classical RAM machine, except that each instruction performed by the machine may fail with probability $q < 1/(6e)$. The failures are transient and independent of each other. The definition of this model (which we call ERRAM) follows a previous work \cite{lp}, where we did not allow for addressing errors. We show how an arbitrary RAM program may be transformed into an ERRAM program, working with assigned probability $1-\delta$, provided that we know the number $T$ of instructions that are executed by the original program, or an upper bound to it. One major tool that we use is redundancy, namely, each operation is repeated a certain number of times, and each stored value is kept in several copies. We derive upper bounds on the amount of redundancy that is needed to achieve the desired confidence. This allows us to bound both the space complexity and the expected time complexity of the resulting ERRAM program, in terms of $T$, $\delta$ and $q$. |
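The redundancy tool can be sketched as follows (a toy illustration under our own naming, not the ERRAM construction itself): repeating a faulty operation an odd number of times and majority-voting sharply reduces the error probability, at the cost of a constant factor in time.

```python
import random

def faulty_add(x, y, q, rng):
    """Models an ERRAM-style instruction: returns x + y, but with
    probability q returns a corrupted value (transient, independent
    failures; the corruption model here is arbitrary)."""
    if rng.random() < q:
        return x + y + rng.randint(1, 10)
    return x + y

def redundant_add(x, y, q, rng, copies=5):
    """Repeat the faulty operation an odd number of times and take the
    majority value: the error probability drops from q to roughly
    O(q^((copies+1)/2))."""
    results = [faulty_add(x, y, q, rng) for _ in range(copies)]
    return max(set(results), key=results.count)

# Compare empirical error rates of the bare vs. redundant operation
rng = random.Random(42)
q = 0.1
single_errs = sum(faulty_add(2, 3, q, rng) != 5 for _ in range(2000))
voted_errs = sum(redundant_add(2, 3, q, rng) != 5 for _ in range(2000))
```

With q = 0.1 and five copies, the voted error rate falls from about 10% to well under 1%, which is the effect the redundancy bounds in the report quantify.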

Amato, Gianluca and Coppola, Massimo and Gnesi, Stefania and Scozzari, Francesca and Semini, Laura
We propose a formal method to validate the reliability of a web application, by modeling interactions among its constituent objects. Modeling exploits the recent "Multiple Levels of Integrity" mechanism which allows objects with dynamically changing reliability to cooperate within the application. The novelty of the method is the ability to describe systems where objects can modify their own integrity level, and react to such changes in other objects. The model is formalized with a process algebra, properties are expressed using the ACTL temporal logic, and can be verified by means of a model checker. Any instance of the above model inherits both the established properties and the proof techniques. To substantiate our proposal we consider several case-studies of web applications, showing how to express specific useful properties, and their validation schemata. Examples range from on-line travel agencies, inverted Turing test to detect malicious web-bots, to content cross-validation in peer to peer systems. |

Luccio, Fabrizio and Pagli, Linda
The restricted rotation distance d_{R}(S,T) between two trees S, T of n vertices is the minimum number of rotations by which S can be transformed into T, where rotations can only take place at the root of the tree, or at the right child of the root. A sharp upper bound d_{R}(S,T)\leq 4n-8 is known, based on the word metric of Thompson's group. We refine this bound to d_{R}(S,T)\leq 4n-8-\rho_{S}-\rho_{T}, where \rho_{S} and \rho_{T} are the numbers of vertices in the rightmost vertex chains of the two trees, providing a very simple transformation algorithm based on elementary properties of trees. We also generalize the concept of restricted rotation by allowing rotations at the highest k levels of the tree, and study the new distance for k=2,3. |

Frangioni, Antonio and Gentile, Claudio
Interior Point algorithms for Min-Cost Flow problems have been shown to be competitive with more established "combinatorial" approaches, at least on some problem classes and for very large instances. A fundamental contribution to the efficiency of these approaches is provided by the specialized crossover routines that can be implemented for this problem; they allow early termination of the IP approach, sparing it the "nasty" iterations where the core linear systems become very difficult to solve by iterative algorithms. Because crossover procedures are nothing but one step of a combinatorial approach to MCF, we propose to extend this concept to that of extended crossover: using the initial part of an Interior Point algorithm for MCF to warm-start a "combinatorial" algorithm. We report our experiments along this line using, as "combinatorial companion", a primal-dual algorithm for MCF that has a natural way of exploiting the information provided by the IP approach. Our results show that the efficiency of the combined approach critically depends on the accurate selection of a set of parameters among very many possible ones, for which designing accurate guidelines appears not to be an easy task; however, they also show that the combined approach can be competitive with the two original approaches in isolation, at least on some "difficult" instances. |

Dobrev, Stefan and Flocchini, Paola and Prencipe, Giuseppe and Santoro, Nicola
Consider a networked environment, supporting mobile agents, where there is a black hole: a harmful host that disposes of visiting agents upon their arrival, leaving no observable trace of such a destruction. The black hole search problem is that of assembling a team of asynchronous mobile agents, executing the same protocol and communicating by means of whiteboards, to successfully identify the location of the black hole; we are concerned with solutions that are generic (i.e., topology-independent). We establish tight bounds on the size of the team (i.e., the number of agents), and the cost (i.e., the number of moves) of a size-optimal solution protocol. These bounds depend on the a priori knowledge the agents have about the network, and on the consistency of the local labellings. In particular, we prove that: with topological ignorance $\Delta+1$ agents are needed and suffice, and the cost is $\Theta(n^2)$, where $\Delta$ is the maximal degree of a node and $n$ is the number of nodes in the network; with topological ignorance but in the presence of sense of direction only $two$ agents suffice and the cost is $\Theta(n^2)$; and with complete topological knowledge only $two$ agents suffice and the cost is $\Theta(n \log n)$. All the upper-bound proofs are constructive. |

Geraci, Filippo and Grossi, Roberto
We consider the problem of fast IP address lookup in the forwarding engines of Internet routers. Many hardware and software solutions available in the literature solve a more general problem on strings, the longest prefix match. These solutions are then specialized on real IPv4/IPv6 addresses to work well on the specific IP lookup problem. We propose to go the other way around. We first analyze over 2400 public snapshots of routing tables collected over five years, discovering what we call the "middle-class effect" of those routes. We then exploit this effect for tailoring a simple solution to the IP lookup scheme, taking advantage of the skewed distribution of Internet addresses in routing tables. Our algorithmic solution is easy to implement in hardware or software as it is tantamount to performing an indirect memory access. Its performance can be bounded tightly in the worst case and has very low memory dependence (e.g., just one memory access to off-chip memory in the hardware implementation). It can quickly handle route announcements and withdrawals on the fly, with a small cost which scales well with the number of routes. Concurrent access is permitted during these updates. Our ideas may be helpful for attaining state-of-art link speed and may contribute to setting up a general framework for designing lookup methods by data analysis. |
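As a baseline for the longest prefix match problem the paper starts from, a minimal binary trie can be sketched as follows (a textbook illustration, not the paper's middle-class-effect scheme, which replaces this traversal with an indirect memory access):

```python
class PrefixTrie:
    """Minimal binary trie for longest-prefix-match lookup over bit
    strings; each trie node is a plain dict, with an optional "hop"
    entry marking the end of an inserted prefix."""

    def __init__(self):
        self.root = {}

    def insert(self, prefix_bits: str, next_hop: str):
        """Store a route: a bit-string prefix mapped to a next hop."""
        node = self.root
        for b in prefix_bits:
            node = node.setdefault(b, {})
        node["hop"] = next_hop

    def lookup(self, addr_bits: str):
        """Walk the address bits, remembering the last next hop seen:
        that is the longest matching prefix."""
        node, best = self.root, None
        for b in addr_bits:
            if "hop" in node:
                best = node["hop"]
            if b not in node:
                return best
            node = node[b]
        return node.get("hop", best)

t = PrefixTrie()
t.insert("1100", "A")        # a short route
t.insert("110010", "B")      # a more specific route under it
hop = t.lookup("11001010")   # matches the longer prefix
```

A trie makes one memory access per address bit in the worst case; the skewed ("middle-class") distribution of real route lengths is what lets the paper's scheme collapse this to essentially a single indirect memory access.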

Bevilacqua, Roberto and Bozzo, Enrico and Del Corso, Gianna M.
We study transformations by unitary similarity of a nonderogatory matrix to certain rank structured matrices. This kind of transformations can be described in a unified framework that involves Krylov matrices. The rank structures here addressed are preserved by QR iterations, and every iterate can be associated with a suitable Krylov matrix. |

Ferragina, Paolo and Gullì, Antonio
Search engines provide a view of the Web, and their smart ranking algorithms are their point of view. To offer the best view, personalized ranking algorithms are currently flourishing. They focus on the users rather than on their submitted queries, by taking into account some contextual/profiled information. In this paper we propose a personalized (meta-)search engine based on web-snippet hierarchical clustering technology (à la Vivisimo) that is fully adaptive and non-intrusive both for the user and for the queried search engine(s). It works on top of 16 commodity search engines and fetches 200 (or more) results from them per user query. Our engine is able to mine on-the-fly the fine and variegated ''themes'' behind these results and then organize them in a hierarchy of folders that offers, at various levels of detail, an up-to-date picture of these results. Users can therefore browse the hierarchy, select the themes that best match the ``intention'' behind their query, and ask our engine to personalize on-the-fly those query results according to their choices. In this way lazy users are not limited to looking at the first ten results, but immediately acquire several points of view on a larger pool (about 200) of them! We claim that there exists a mutual reinforcement relationship between ranking and web-snippet clustering from which both may benefit. Our extensive experiments show that this form of personalization is very effective on informative queries, polysemous queries, and poor queries consisting of at most two terms (more than 80% of Web queries are of this type!). In these cases, in fact, one theme might be so web-popular as to monopolize the top-ten results of link-based ranking algorithms. |

Tuosto, Emilio and Vieira, Hugo Torres
Spatiality is an important aspect of distributed systems because their computations depend both on the dynamic behaviour and on the structure of their components. Spatial logics have been proposed as the formal device for expressing spatial properties of systems. We define a CCS-like calculus whose semantics allows one to observe spatial aspects of systems, on top of which we define models of the spatial logic. Our alternative definition of models is proved equivalent to the standard one. Furthermore, logical equivalence is characterised in terms of the bisimilarity of the calculus. |

Felicioli, C.
BpMatch is a new algorithm that efficiently computes, given sequences S and T, the maximum coverage of T using only subsequences and complemented reversed subsequences of S, of minimum length l and possibly overlapping, and, among such maximum coverages, minimizes the number of subsequences used. The problem is solved by first preprocessing S (independently of the sequence on which a maximum coverage will later be sought, so the preprocessing is executed only once for any target sequence T), generating a graph that allows fast recognition of the subsequences of S. Graphs G and G' are generated from S and from the complemented reversal of S; then, using G, G' and T, the maximum coverage can be computed in linear space and time. |

Bernazzani, L. and Duce, C. and Micheli, Alessio and Mollica, V. and Sperduti, Alessandro and Starita, Antonina and Tinè, M.R.
Recursive neural networks deal with prediction tasks for compounds represented in a structured domain. These approaches allow combining, in a learning system, the flexibility and general advantages of a neural network model with the representational power of a structured domain. As a result, a completely new approach to QSPR/QSAR analysis is obtained through the adaptive processing of molecular graphs, whose performance is even better than that of traditional approaches. In this paper a Recursive Cascade Correlation model (RecCC) has been applied to the analysis of the Gibbs free energies of solvation in water of 179 monofunctional open-chain organic compounds. An original representation of the molecules in terms of labeled directed positional acyclic graphs has been developed by identifying a limited set of constituent atomic groups and representation rules. The descriptive and predictive abilities of the RecCC model have been tested and compared with those of a traditional group contribution method. The inherent ability of the RecCC model to abstract chemical knowledge through the adaptive learning process has been investigated by Principal Component Analysis of the internal representations developed by the neural network, finding that the model recognizes the chemical compounds on the basis of a nontrivial combination of their chemical structure and target property. |

Bevilacqua, Roberto and Bozzo, Enrico and Menchi, Ornella
In this paper the problem of reconstructing a two-dimensional tomographic medical image is considered. The emission function is defined on a plane circular domain and has to be reconstructed from data collected by SPECT. Through the natural pixel discretization approach the solution is expressed as a linear combination of functions belonging to a suitable basis. We consider here four different bases, all of them giving a highly structured coefficient matrix. By the Fourier transform the linear system thus obtained can be solved efficiently. The computational cost and the performance of the bases are compared. The numerical experimentation shows that when the data are contaminated by Poissonian noise all the bases are substantially equivalent from the point of view of the reconstruction efficiency. |

Bigi , Giancarlo and Passacantando, Mauro
The auxiliary problem principle allows solving a given equilibrium problem (EP) through an equivalent auxiliary problem with better properties. The paper investigates two families of auxiliary EPs: the classical auxiliary problems, in which a regularizing term is added to the equilibrium bifunction, and the regularized Minty EPs. The conditions that ensure the equivalence of a given EP with each of these auxiliary problems are investigated. This analysis leads to extending some known results for variational inequalities and linear EPs to the general case; moreover, new results are obtained as well. In particular, both new results on the existence and uniqueness of solutions and new error bounds based on gap functions with good convexity properties are obtained under weak quasimonotonicity or weak concavity assumptions. |

Brogi, Antonio and Popescu, Razvan and Tanca, Matteo
The aggregation methodology we propose in this paper automatically generates the service contract of a composite service from a set of contracts to be aggregated, together with a data-flow mapping linking service parameters. Service contracts consist of a (WSDL) signature, (OWL) ontology information, and a (YAWL) behaviour specification. The aggregation process generates the workflow of the composite from the initial workflows by suitably adding control-flow constraints among their tasks due to data-flow dependencies among task parameters. After describing the whole methodology, we also give an insight into our proof-of-concept Java prototype implementation of the aggregation process. |

Amato, Gianluca and Scozzari, Francesca
We face the problem of devising optimal unification operators for domains of abstract substitutions. In particular, we are interested in abstract domains for sharing and linearity properties, which are widely used in logic program analysis. We propose a new (infinite) domain ShLin which can be thought of as a general framework from which other domains can be easily derived by abstraction. The advantage is that abstract unification in ShLin is simple and elegant, and it is based on a new concept of sharing graph, which plays the same role as alternating paths for pair sharing analysis. We also provide an alternative, purely algebraic description of sharing graphs. Starting from the results for ShLin, we derive optimal abstract operators for two well-known domains which combine sharing and linearity: a domain proposed by Andy King and the classic Sharing X Lin. |

Bevilacqua, Roberto and Bozzo, Enrico
In 1959 Edgar Asplund presented a geometric lemma that largely anticipated many results on the structure of inverses of band matrices. In this note we discuss an extension of Asplund's lemma that addresses the concept of generalized inverse, in particular the Moore-Penrose inverse. |

Bevilacqua, Roberto and Bozzo, Enrico
The Sylvester-Kac matrix is a tridiagonal matrix with integer entries and integer eigenvalues that appears in a variety of applicative problems. We show that it belongs to a four dimensional linear space of tridiagonal matrices that can be simultaneously reduced to triangular form. We name this space after the matrix. |
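As a quick illustration of the matrix itself (a self-contained sketch using one common convention, not code from the report), the order-(n+1) Sylvester-Kac matrix can be built and its well-known integer eigenvalues -n, -n+2, ..., n checked with exact integer determinants:

```python
# Hypothetical sketch: one standard form of the (n+1)x(n+1) Sylvester-Kac
# matrix has zero diagonal, superdiagonal n, n-1, ..., 1 and subdiagonal
# 1, 2, ..., n; its eigenvalues are the integers -n, -n+2, ..., n.

def sylvester_kac(n):
    m = n + 1
    K = [[0] * m for _ in range(m)]
    for i in range(n):
        K[i][i + 1] = n - i   # superdiagonal: n, n-1, ..., 1
        K[i + 1][i] = i + 1   # subdiagonal: 1, 2, ..., n
    return K

def det(M):
    """Exact integer determinant by cofactor expansion (fine for small n)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

n = 4
K = sylvester_kac(n)
# det(K - lam*I) vanishes exactly at every claimed eigenvalue
for lam in range(-n, n + 1, 2):
    shifted = [[K[i][j] - (lam if i == j else 0) for j in range(n + 1)]
               for i in range(n + 1)]
    assert det(shifted) == 0
print(list(range(-n, n + 1, 2)))  # [-4, -2, 0, 2, 4]
```

Because everything is integer arithmetic, the check is exact rather than approximate.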

Fantini, Paola and Montangero, Carlo and Palasciano, Claudio and Reiff-Marganiec, Stephan and Semini, Laura
The integration of BPM and SOA has the potential to lead to increased agility, lower development and maintenance costs and a better alignment between business and IT. However, it still requires large efforts by highly skilled personnel. The Service-Targeted Policy-Oriented WorkfLow Approach attacks this problem by integrating workflows and services with policies to clearly distinguish between the core process description and the variations. This paper presents a first assessment of the impact of the approach on the design of business processes. To this end, we exploit a case study, namely the activation of VoIP services by a re-seller of telecommunication services. |

Cherubini, Davide and Fanni, Alessandra and Frangioni, Antonio and Murgia, Cristian and Scutellà, Maria Grazia and Zuddas, Paola
The paper concerns the problem of minimizing the maximum link utilization of IP telecommunication networks under the joint use of the traditional IS-IS/OSPF protocol and the more sophisticated MPLS-TE technology. Both working conditions and single link failure scenarios are addressed. An innovative Linear Programming mathematical model is proposed that, while optimizing the network utilization, provides optimal user performance, efficient use of network resources, and 100% survivability in case of single link failure. The hybrid approach takes advantage of both IGP and MPLS-TE technologies, providing a flexible tool for IP network traffic engineers. The proposed model is validated by a wide experimentation performed on synthetic and real networks, both in working and failure conditions. The obtained results show that the new approach considerably reduces the maximum utilization of the network, thereby allowing a more efficient use of the network resources; setting a limited number of LSPs yields better results than those obtained by simply optimizing the IGP weights. In addition, an optimized set of IGP weights allows further improving the network performance in case of failures. The computational time needed to solve the LP model is very limited and matches well the real-time requirements. |

Vandebril, Raf and Del Corso, Gianna M.
In this paper a new framework for transforming arbitrary matrices to compressed representations is presented. The framework provides a generic way of transforming a matrix via unitary similarity transformations to e.g. Hessenberg form, Hessenberg-like form, and combinations of both. The new algorithms are deduced based on the $QR$-factorization of the original matrix. Based on manipulations with Givens transformations, all the algorithms consist of eliminating the correct set of Givens transformations, resulting in a matrix obeying the desired structural constraints. Based on this new reduction procedure we investigate further correspondences such as irreducibility, uniqueness of the reduction procedure and the link with (rational) Krylov methods. The unitary similarity transform to Hessenberg-like form presented here differs significantly from the one presented in earlier work. Not only does it use fewer Givens transformations to obtain the desired structure, but the convergence to rational Ritz values is also not observed in the standard way. |

Bevilacqua, Roberto and Bozzo, Enrico and Del Corso, Gianna M.
In the last few years many numerical techniques for computing eigenvalues of structured rank matrices have been proposed. Most of them are based on $QR$ iterations since, in the symmetric case, the rank structure is preserved and high accuracy is guaranteed. In the unsymmetric case, however, the $QR$ algorithm destroys the rank structure, which is instead preserved if $LR$ iterations are used. We consider a wide class of quasiseparable matrices which can be represented in terms of the same parameters involved in their Neville factorization. This class, if assumptions are made to prevent possible breakdowns, is closed under $LR$ steps. Moreover, we propose an implicit shifted $LR$ method with a linear cost per step, which resembles the qd method for tridiagonal matrices. We show that for totally nonnegative quasiseparable matrices the algorithm is stable and breakdowns cannot occur, if the Laguerre shift, or another shift strategy preserving nonnegativity, is used. Computational evidence shows that good accuracy is obtained also when the method is applied to symmetric positive definite matrices. |

Aldinucci, Marco and Danelutto, Marco and Kilpatrick, Peter and Xhagjika, Vamis
We describe a lightweight prototype framework designed for experimentation with behavioural skeletons. A behavioural skeleton is a component implementing a well-known parallelism exploitation pattern and a rule-based autonomic manager taking care of some non-functional feature related to the parallel computation. Our prototype supports multiple autonomic managers within the same behavioural skeleton, each taking care of a different non-functional concern. The different managers in the behavioural skeleton coordinate themselves in such a way that a global, user-provided SLA can be satisfied. We discuss experiments that validate the manager coordination protocol, and the overall prototype functionality. The prototype is built on top of plain Java and employs JBoss rules for management. We present experimental results that demonstrate the operation of our prototype and allow overheads to be evaluated. |

Bacciu, Davide and Micheli, Alessio and Sperduti, Alessandro
Hidden Tree Markov Models describe probability distributions over tree-structured data by defining a top-down generative process from the root to the leaves of the tree. We provide a novel compositional hidden tree Markov model that inverts the generative process, allowing hidden states to better correlate and model the co-occurrence of substructures among the child subtrees of internal nodes. To this end, we introduce a mixed memory approximation that factorizes the joint children-to-parent state transition matrix as a mixture of pairwise transitions. This Technical Report provides an in-depth introduction to the Bottom-Up Hidden Tree Markov Model, including the details of the learning and inference procedures. |

Cioni, Lorenzo
This Technical Report, written in Italian, represents a short introduction to some basic concepts of System Dynamics. The focus is mainly on Causal Loop diagrams and on Flow diagrams, and many examples of the construction of such diagrams are given. It also contains a discussion of how differential equations of order higher than one can be represented and numerically solved using Vensim models. Reports of errors and inaccuracies of any kind are appreciated and really welcome. |
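The reduction described can be sketched outside Vensim as well; the following toy example (my illustration, not taken from the report) rewrites the second-order equation x'' = -x as two coupled first-order "stocks" and integrates them with explicit Euler steps, which is essentially what a simulation tool does internally:

```python
import math

# Hypothetical sketch: the second-order ODE x'' = -x is represented by two
# first-order equations (two "stocks"): x' = v and v' = -x, integrated here
# with explicit Euler steps of size dt.
def integrate(x0, v0, t_end, dt):
    x, v = x0, v0
    for _ in range(int(round(t_end / dt))):
        x, v = x + v * dt, v - x * dt  # update both stocks from the old values
    return x, v

x, v = integrate(x0=1.0, v0=0.0, t_end=1.0, dt=0.0005)
# exact solution: x(t) = cos(t), v(t) = -sin(t)
print(abs(x - math.cos(1.0)) < 0.01)  # True
```

With a small enough step, the numerical trajectory stays close to the exact cosine solution; a real model would typically use a higher-order integrator.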

Aldinucci, Marco and Danelutto, Marco and Kilpatrick, Peter and Meneghin, Massimiliano and Torquati, Massimo
<p class="MsoPlainText" style="MARGIN: 0cm 0cm 0pt"><font size="3"><font face="Calibri">FastFlow is a programming environment specifically targeting cache-coherent shared-memory multi-cores. FastFlow is implemented as a stack of C++ template libraries built on top of lock-free (fence-free) synchronization mechanisms. In this paper we present a further evolution of FastFlow enabling programmers to offload part of their workload on a dynamically created software accelerator running on unused CPUs. The offloaded function can be easily derived from pre-existing sequential code. We emphasize in particular the effective trade-off between human productivity and execution efficiency of the approach.</font></font></p> |

Recchia, Raffaella and Scutellà, Maria Grazia
The aim of this paper is to compare alternative forms of robustness in the context of portfolio asset allocation. Starting from the concept of convex risk measures, a new family of models, called norm-portfolio models, is first proposed, in which not only the values of the uncertainty parameters but also their degree of feasibility are specified. This relaxed form of robustness is obtained by exploiting the link between convex risk measures and classical robustness. Then, we test some norm-portfolio models, as well as various robust strategies from the literature, with real market data on three different data sets. The objective of the computational study is to compare alternative forms of relaxed robustness - the relaxed robustness characterizing the norm-portfolio models, the so-called soft robustness, and the CVaR robustness. In addition, the models above are compared to a more classical robust model from the literature, in order to investigate similarities and dissimilarities between robust models based on convex risk measures and more traditional robust approaches. To the best of our knowledge, this is the first attempt at comparing robust strategies of different kinds in the framework of portfolio asset allocation. |

Scutellà, Maria Grazia
A generalization of the robust network design problem with oblivious routing is investigated in (Scutella', 2009), where the (uncertain) demands are served through two alternative routing templates. As indicated in that paper, it is an open issue as to whether the proposed problem, called (2-RND), is polynomially solvable or is NP-Hard. In this note we solve the issue by proving that (2-RND), as well as some generalizations, are NP-Hard. The hardness result holds true also when some routing templates are given as input data. This strengthens the results in (Scutella', 2009), where special (2-RND) cases are devised which are tractable from a computational perspective. |

Vandebril, Raf and Del Corso, Gianna M.
Hermitian plus possibly non-Hermitian low rank matrices can be efficiently reduced into Hessenberg form. The resulting Hessenberg matrix can still be written as the sum of a Hermitian plus low rank matrix. In this paper we develop a new implicit multishift $QR$-algorithm for Hessenberg matrices which are the sum of a Hermitian plus a possibly non-Hermitian low rank correction. The proposed algorithm exploits both the symmetry and the low rank structure to obtain a $QR$-step involving only $O(n)$ floating point operations instead of the standard $O(n^2)$ operations needed for performing a $QR$-step on a Hessenberg matrix. The algorithm is based on a suitable $O(n)$ representation of the Hessenberg matrix. The low rank parts present in both the Hermitian and the low rank part of the sum are compactly stored by a sequence of Givens transformations and a few vectors. Due to the new representation, we cannot apply classical deflation techniques for Hessenberg matrices. A new, efficient technique is developed to overcome this problem. Some numerical experiments based on matrices arising in applications are performed. The experiments illustrate the effectiveness and accuracy of both the $QR$-algorithm and the newly developed deflation technique. |

Frangioni, Antonio and Pascali, Fausto and Scutellà, Maria Grazia
This paper addresses special cases of the robust network design problem under the single-source Hose model. We show that, in the case of unitary bounds, the static and the dynamic routing approaches lead to the same optimal solution, and this is true for both the splittable and the unsplittable scenarios. As a consequence, in such a special case, the robust network design problem with (splittable) dynamic routing is polynomially solvable, whereas the problem is coNP-Hard under the general single-source Hose model. The results are based on the fact that the single-source Hose polyhedron with unitary bounds is dominated by a polynomial number of demand vectors. A feasible static routing can then be constructed as a convex combination of a set of routing templates which are feasible for the dominant demand vectors. The equivalence between static and dynamic routing is a consequence of those results, and it can also be generalized to some single-source Hose cases with non-unitary bounds. |

Ciancia, Vincenzo and Kurz, Alexander and Montanari, Ugo
Calculi that feature resource-allocating constructs (e.g. the pi-calculus or the fusion calculus) require special kinds of models. The best-known ones are presheaves and nominal sets. But named sets have the advantage of being finite in a wide range of cases where the other two are infinite. The three models are equivalent. Finiteness of named sets is strictly related to the notion of finite support in nominal sets and the corresponding presheaves. We generalise previous equivalence results by introducing a notion of minimal support in presheaf categories indexed over small categories of monos. We show that nominal sets are generalised by families, that is, free coproduct completions, indexed by symmetries. |

Luccio, Fabrizio and Mesa Enriquez, Antonio and Pagli, Linda
The rotation distance d(S,T) between two binary trees S, T of n vertices is the minimum number of rotations needed to transform S into T. While it is known that d(S,T) <= 2n-6, a well-known conjecture states that there are trees for which this bound is sharp for any value of n >= 11. We are unable to prove the conjecture, but we give here some simple criteria for lower bound evaluation, leading for example to the identification of some ``regular'' tree structures for which d(S,T) = 3n/2-O(1), or d(S,T) = 5n/3-O(1). |
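For readers unfamiliar with the operation being counted, a single right rotation rewires three subtrees around an edge while preserving the inorder sequence of keys; a minimal sketch (hypothetical node layout, not from the report):

```python
# Hypothetical sketch of the rotation operation counted by d(S, T):
# a right rotation at a node whose left child is the pivot moves the pivot
# up and the old root down, without changing the inorder sequence.

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_right(root):
    pivot = root.left            # pivot becomes the new subtree root
    root.left = pivot.right      # pivot's right subtree moves across
    pivot.right = root
    return pivot

def inorder(t):
    return [] if t is None else inorder(t.left) + [t.key] + inorder(t.right)

S = Node(2, Node(1), Node(3))    # balanced tree on keys 1..3
T = rotate_right(S)              # key 1 is now the root
print(inorder(T))  # [1, 2, 3] -- the inorder sequence is preserved
```

The rotation distance is then the length of a shortest sequence of such left/right rotations turning one tree shape into the other.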

Dell'Acqua, Pietro and Frangioni, Antonio and Serra Capizzano, Stefano
We consider multi-iterative techniques of multigrid type for the numerical solution of large linear systems whose matrices have a (weighted) graph structure. We combine proper coarser-grid operators with auxiliary techniques. We show that the most effective smoothers have to be of Krylov type with spanning tree preconditioners, while the projectors have to be designed to maintain as much as possible the graph structure of the matrix at the inner levels. Some necessary and sufficient conditions are proved; in this framework it is possible to explain why the classical projectors inherited from differential equations are good in the differential context and why they behave unsatisfactorily for unstructured graphs. Several numerical experiments have been conducted, showing that our approach is effective even in very difficult cases where the known approaches are rather slow. |

Bertolli, Carlo and Buono, Daniele and Mencagli, Gabriele and Vanneschi, Marco
Several complex and time-critical applications require the existence of novel distributed, heterogeneous and dynamic platforms composed of a variety of fixed and mobile processing nodes and networks. Such platforms, that can be called Pervasive Mobile Grids, aim to merge the features of Pervasive Computing and High-performance Grid Computing onto a new emerging paradigm. In this Chapter we study a methodology for designing high-performance distributed computations, able to exploit the heterogeneity and dynamicity of Pervasive Grids, by expressing Adaptivity and Context Awareness directly at the application level. We describe a programming model approach, and we compare it with other existing research works in the field of Pervasive Mobile Computing, discussing the rationales of the requirements and the features of a novel programming model for the target platforms and applications. In order to exemplify the proposed methodology we introduce our evaluation framework ASSISTANT, and we provide some interesting future directions in this research field. |

Bacciu, Davide and Buscemi, Maria Grazia and Mkrtchyan, Lusine
Service composition concerns both the integration of heterogeneous distributed applications and the dynamic selection of services. QoS-aware selection enables a service requester with certain QoS requirements to classify services according to their QoS guarantees. In this paper we present a method that allows for a fuzzy-valued description of QoS parameters. Fuzzy sets are suited to specify both the QoS preferences raised by a service requester, such as `response time must be as low as possible and cannot be more than 1000 ms', and the approximate estimates a provider can make on the QoS capabilities of its services, like `availability is roughly between 95% and 99%'. We propose a matchmaking procedure based on a fuzzy-valued similarity measure that, given the specifications of the QoS parameters of the requester and the providers, selects the most appropriate service among several functionally-equivalent ones, using a fuzzy ordering relation. We also devise a method for dynamic update of service offers by means of runtime monitoring of the actual QoS performance. |

Gallicchio, Claudio and Micheli, Alessio
Echo State Networks (ESNs) represent an emerging paradigm for modeling Recurrent Neural Networks (RNNs). In this report we try to identify and investigate some of the main aspects that can account for the success and limitations of this class of models. Independently of the architectural design, we first show the effect on ESN behavior due to the contractivity of the state transition function and the related Markovian bias. The purpose of our study is also to give an insight into how and why a larger reservoir may improve the predictive performance. We identify four key factors which can influence the performance of ESNs: input variability, multiple time-scales dynamics, non-linear interactions among units and regression in a high dimensional state space. Several variants of the basic ESN model are introduced in order to study these main factors. The proposed variants are tested on four datasets: the Mackey-Glass chaotic time series, the 10th order NARMA system, and two predictive tasks on a symbolic sequence domain with Markovian/anti-Markovian flavor. Experimental evidence shows that all the key identified factors have a major role in determining ESN performances. |
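The contractivity effect analysed in the report can be seen even in a one-unit "reservoir" (a toy sketch of the general phenomenon, not one of the models tested in the report): when the state transition is a contraction, two different initial states driven by the same input suffix end up arbitrarily close, which is the Markovian bias.

```python
import math

# Hypothetical one-unit reservoir: x' = tanh(a*x + u) with |a| < 1 is a
# contraction in x, so the state forgets its initial condition -- states
# depend (approximately) only on the recent input suffix (Markovian bias).
def run(x0, inputs, a=0.5):
    x = x0
    for u in inputs:
        x = math.tanh(a * x + u)
    return x

suffix = [0.3, -0.1, 0.7, 0.2, 0.5, -0.4, 0.1, 0.6, -0.2, 0.3]
x_from_minus1 = run(-1.0, suffix)
x_from_plus1 = run(+1.0, suffix)
print(abs(x_from_minus1 - x_from_plus1) < 1e-2)  # True: states have converged
```

The gap between the two trajectories shrinks by at least the factor a = 0.5 per step, so after ten steps the initial difference of 2 is reduced below 0.002.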

Frangioni, Antonio and Perez Sanchez, Luis
Complex, hierarchical, multi-scale industrial and natural systems generate increasingly large mathematical models. I-DARE is a structure-aware modeling-reformulating-solving environment based on Declarative Programming, that allows the construction of complex structured models. The main aim of the system is to produce models that can be automatically and algorithmically reformulated to search for the ''best'' formulation, intended as the one for which the most efficient solution approach is available. This requires exploration of a high-dimensional space comprising all (structured) reformulations of a given instance, all available solvers for (each part of) the formulation, and all possible configurations of the relevant algorithmic parameters for each solver. A fundamental pre-requisite for this exploration is the ability to predict the efficiency of a given (set of) algorithm(s), considering their configuration(s), for a given instance; this is, however, a vastly nontrivial task. This article describes how the I-DARE system organizes the information on the instance at hand in order to make the search in the (formulation, solver, configuration) space possible with several different exploration techniques. In particular, we propose a way to combine general machine learning mechanisms and ad-hoc methods, where available, in order to effectively compute the "objective function" of the search, i.e., the prediction of the effectiveness of a point in the space. We also discuss how this mechanism can take upon itself part of the exploration, the one in the sub-space of configurations, thus simplifying the task to the rest of the system by reducing the dimensionality of the search space it has to traverse. |

Frangioni, Antonio and Perez Sanchez, Luis
Complex, hierarchical, multi-scale industrial and natural systems generate increasingly large mathematical models. Practitioners are usually able to formulate such models in their ''natural'' form; however, solving them often requires finding an appropriate reformulation to reveal structures in the model which make it possible to apply efficient, specialized approaches. I-DARE is a structure-aware modeling-reformulation-solving environment based on Declarative Programming. It allows the construction of complex structured models, which can be automatically and algorithmically reformulated to search for the best formulation, intended as the one for which the most efficient solution approach is available. In order to accommodate (potentially) all possible specialized solution methods, it defines a general software framework for solvers, which are ''registered'' to specific problem structures. This article describes in detail the application of Artificial Intelligence in the modeling and reformulation modules of I-DARE, showing how Declarative Programming can be used to design a structure-aware modeling environment that allows for a new automatic reformulation methodology. |

Meneghin, Massimiliano and Vanneschi, Marco
In stencil-based parallel applications, communications represent the main overhead, especially when targeting a fine-grain parallelization in order to reduce the completion time. Techniques that minimize the number and the impact of communications are clearly relevant. In the literature, the best optimization reduces the number of communications per step from 3^dim, featured by a naive implementation, to 2*dim, where dim is the number of the domain dimensions. To break down the previous bound, in this paper we introduce and formally prove Q-transformations, for stencils featuring data dependencies that can be expressed as geometric affine translations. Q-transformations, based on the orientation of data dependencies through space translations, lower the number of communications per step to dim. |

Baglioni, Miriam and Ratti, Elena and Turini, Franco
A tool for extracting sequential patterns with temporal and content constraints from logs is presented. The heart of the tool is a new algorithm capable of handling temporal constraints and itemsets with multiple occurrences of the same item. Applications of the system are discussed in the context of monitoring logs registered by a platform supporting distributed processes, developed within the European project BRITE. |

Vandebril, Raf and Del Corso, Gianna M.
Hermitian plus possibly non-Hermitian low rank matrices can be efficiently reduced into Hessenberg form. The resulting Hessenberg matrix can still be written as the sum of a Hermitian plus low rank matrix. In this paper we develop a new implicit multishift QR-algorithm for Hessenberg matrices which are the sum of a Hermitian plus a possibly non-Hermitian low rank correction. The proposed algorithm exploits both the symmetry and the low rank structure to obtain a QR-step involving only O(n) floating point operations instead of the standard O(n^2) operations needed for performing a QR-step on a Hessenberg matrix. The algorithm is based on a suitable O(n) representation of the Hessenberg matrix. The low rank parts present in both the Hermitian and the low rank part of the sum are compactly stored by a sequence of Givens transformations and a few vectors. Due to the new representation, we cannot apply classical deflation techniques for Hessenberg matrices. A new, efficient technique is developed to overcome this problem. Some numerical experiments based on matrices arising in applications are performed. The experiments illustrate the effectiveness and accuracy of both the QR-algorithm and the newly developed deflation technique. |

Recchia, Raffaella and Scutellà, Maria Grazia
Many financial optimization problems involve future values of security prices, interest rates and exchange rates which are not known in advance, but can only be forecasted or estimated. Such problems fit perfectly into the framework of Robust Optimization which, given optimization problems with uncertain parameters, looks for solutions that will achieve good objective function values for the realization of these parameters in given uncertainty sets. In finance, Robust Optimization offers vehicles to incorporate the estimation of uncertain parameters into the decision making process. This is true, for example, in portfolio asset allocation. Starting from the robust counterparts of the classical mean-variance portfolio problems, in this paper we review some mathematical models that have been recently proposed in the literature to address uncertainty in portfolio asset allocation problems. For some of these, we also focus on algorithmic approaches and computational issues. Finally, we analyze the relationship between robustness and risk measures. |

Luccio, Fabrizio
We present some simple theorems for solving the linear, balanced, constant-order recurrence relations that express the running time of the most common recursive algorithms. |
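A typical instance of such a balanced recurrence (my example, not taken from the report) is T(n) = 2T(n/2) + n with T(1) = 1, the classic running time of mergesort; for n a power of two its closed form is n log2(n) + n, which can be checked directly:

```python
# Hypothetical example of a balanced linear recurrence of constant order:
# T(n) = 2*T(n/2) + n, T(1) = 1 (e.g. the running time of mergesort).
def T(n):
    return 1 if n == 1 else 2 * T(n // 2) + n

for k in range(4):            # n = 1, 2, 4, 8
    n = 2 ** k
    assert T(n) == n * k + n  # closed form: n*log2(n) + n
print(T(8))  # 32
```

Unrolling the recurrence level by level gives log2(n) levels each contributing n, plus n leaves of cost 1, which is exactly the closed form verified above.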

Cioni, Lorenzo
Electoral systems are complex entities composed of a set of phases that form a process to which performance parameters can be associated. One of the key points of every electoral system is represented by the electoral formula, which can be characterized by a wide spectrum of properties that, according to Arrow's Impossibility Theorem and other theoretical results, cannot all be satisfied at the same time. Starting from these basic results, the aim of this paper is to examine such properties within a hierarchical framework, based on the Analytic Hierarchy Process proposed by T. L. Saaty, performing pairwise comparisons at various levels of a hierarchy so as to get a global ranking of such properties. Since any real electoral system is known to satisfy some of such properties but not others, it should be possible, in this way, to get a ranking of the electoral systems according also to the political goals of both the voters and the candidates. In this way it should be possible to estimate the relative importance of each property with respect to the final ranking of every electoral formula. |

Bonchi, Filippo and Montanari, Ugo
The operational semantics of interactive systems is usually described by labelled transition systems. Abstract semantics is defined in terms of bisimilarity, which, in the finite case, can be computed via the well-known partition refinement algorithm. However, the behaviour of interactive systems is in many cases infinite, and thus checking bisimilarity in this way is unfeasible. Symbolic semantics allows one to define smaller, possibly finite, transition systems, by employing symbolic actions and avoiding some sources of infiniteness. Unfortunately, the standard partition refinement algorithm does not work with symbolic bisimilarity. In a previous paper, we have introduced an abstract theory of symbolic semantics. In this paper, we introduce a partition refinement algorithm for symbolic bisimilarity. |
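The classical partition refinement algorithm mentioned in this abstract can be sketched in a few lines. This is a hedged illustration of the standard, non-symbolic procedure on a finite LTS, not of the symbolic variant the paper introduces; all names are assumptions.

```python
# Partition refinement for strong bisimilarity on a finite LTS (a minimal
# sketch). All states start in one block; blocks are split until every pair
# of states in a block reaches the same set of blocks under every label.

def bisimilarity_classes(states, transitions):
    """transitions: set of (source, label, target) triples."""
    def signature(s, block_of):
        # which blocks state s can reach, per label, under the current partition
        return frozenset((lbl, block_of[t]) for (src, lbl, t) in transitions if src == s)

    partition = [frozenset(states)]          # trivial initial partition
    while True:
        block_of = {s: i for i, b in enumerate(partition) for s in b}
        refined = []
        for block in partition:
            groups = {}
            for s in block:
                groups.setdefault(signature(s, block_of), set()).add(s)
            refined.extend(frozenset(g) for g in groups.values())
        if len(refined) == len(partition):   # fixed point: nothing was split
            return refined
        partition = refined
```

The resulting blocks are the bisimilarity equivalence classes: for example, two states that each loop on the same label end up in the same block, while a deadlocked state is separated from them.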

Occhiuto, Maria Eugena and Bellia, Marco
In \cite{bellia2008} an extension of Java is described which allows methods to have other methods as parameters, and a meaning-preserving transformation is defined which maps programs in the extended language into ordinary Java programs. In this paper we present an implementation of such an extended language, based on the transformation defined. We also discuss the integration of programs in the extended language with ordinary first-order programs, and hence with the Java API. Eventually the language is extended with mc\_parameters, for which an implementation with callbacks \cite{horstmann2007} is shown. |

Cioni, Lorenzo
The present Technical Report contains a concise analysis of some participatory methods and a discussion of the method of consensus as a formal tool for decision making. The treatment is kept at an informal and colloquial level, with the aim of being precise and concise. The Technical Report relies mainly on classical results, so that it may be considered as a real primer on these topics, but it also contains novel material and results. Reports of errors and inaccuracies are gratefully appreciated. |

Aldinucci, Marco and Ruggieri, Salvatore and Torquati, Massimo
The whole computer hardware industry has embraced multicores. For these machines, the extreme optimisation of sequential algorithms is no longer sufficient to squeeze the real machine power, which can only be exploited via thread-level parallelism. Decision tree algorithms exhibit natural concurrency that makes them suitable to be parallelised. This paper presents an approach for an easy-yet-efficient porting of an implementation of the C4.5 algorithm to multicores. The parallel porting requires minimal changes to the original sequential code, and it is able to achieve up to a 7x speedup on an Intel dual-quad-core machine. |
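The natural concurrency of decision tree induction can be illustrated by evaluating the candidate split attributes of one node in parallel: the gain computation per attribute is an independent task. This is only an illustrative sketch, not the paper's C4.5 port; all names are assumptions, and in CPython a thread pool mainly illustrates the task structure (real speedup would need processes or native code).

```python
# Parallel evaluation of split attributes for one decision-tree node:
# each attribute's information gain is an independent task.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor
import math

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    # split the rows on the values of `attr` and measure the entropy drop
    by_value = {}
    for row, lab in zip(rows, labels):
        by_value.setdefault(row[attr], []).append(lab)
    split_e = sum(len(ls) / len(labels) * entropy(ls) for ls in by_value.values())
    return entropy(labels) - split_e

def best_split(rows, labels, attrs, workers=4):
    # dispatch one gain computation per attribute to the pool
    with ThreadPoolExecutor(max_workers=workers) as pool:
        gains = list(pool.map(lambda a: (info_gain(rows, labels, a), a), attrs))
    return max(gains)[1]   # attribute with the highest information gain
```

On a toy dataset where one attribute perfectly predicts the label, `best_split` returns that attribute regardless of the evaluation order.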

Cioni, Lorenzo
In this technical report we describe a number of models of auctions and barters and propose the associated procedures. In this way we face the problem of the fair sharing of goods, bads and possibly services (also collectively termed items) among a set of players that cannot, or do not want to, use a common cardinal scale for the evaluation of the items, owing to the very qualitative and non-economical nature of the items themselves. |

Menconi, Giulia and Felicioli, Claudio and Marangoni, Roberto
Here we report a preliminary analysis of transposons Long Terminal Repeats (LTR) within several yeast strains. |

Cardillo, Franco Alberto and Masulli, Francesco
Breast cancer represents a real emergency in many European and North American countries. In order to detect the disease as soon as it starts developing, many countries have organized screening programs aimed at monitoring women. Traditional imaging modalities, like X-ray mammography, do not provide certain and reliable results on young women or on women who underwent surgical interventions. Three-dimensional breast MRI has proven to be a valuable tool for disambiguating uncertain mammographic findings and for pre-operative planning. However, in order for MRI of the breast to be effective, a contrast agent, highlighting areas with a high degree of vascularization, needs to be used. The full breast volume needs to be acquired once before and several times after the contrast agent injection. A contrast-enhanced MR examination of the breast contains hundreds of images that must be inspected carefully. Automatic approaches to breast cancer detection can help radiologists in this hard task and speed up the inspection process. In this paper we present a survey of automatic approaches to breast cancer detection in MRI. We discuss both registration algorithms, needed to align corresponding images acquired at different instants in time, and classification algorithms able to label a suspicious area according to features automatically computed from the images. |

Ferrari, Gianluigi
We propose a methodology for statically predicting the possible interaction patterns of services within a given choreography. We focus on choreographies exploiting the event notification paradigm to manage service interactions. Control Flow Analysis techniques statically approximate which events can be delivered to match the choreography constraints and how the multicast groups can be optimised to handle event notification within the service choreography. |

Aldinucci, Marco and Meneghin, Massimiliano and Torquati, Massimo
Shared memory multiprocessors have come back to popularity thanks to the rapid spreading of commodity multi-core architectures. As ever, shared memory programs are fairly easy to write and quite hard to optimise; providing multi-core programmers with optimising tools and programming frameworks is today's challenge. Few efforts have been made to support effective streaming applications on these architectures. In this paper we introduce FastFlow, a low-level programming framework based on lock-free queues explicitly designed to support high-level languages for streaming applications. We compare FastFlow with state-of-the-art programming frameworks such as Cilk, OpenMP, and Intel TBB. We experimentally demonstrate that FastFlow is always more efficient than all of them on a set of micro-benchmarks and on a real-world application; the speedup edge of FastFlow over the other solutions can be substantial for fine-grain tasks, for example +35% over OpenMP, +226% over Cilk, and +96% over TBB for the alignment of protein P01111 against the UniProt DB using the Smith-Waterman algorithm. |
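The lock-free queues this abstract refers to are single-producer/single-consumer buffers in which each side advances only its own index and never takes a lock. A minimal Lamport-style ring buffer sketch follows; FastFlow itself is C++, the class and method names here are assumptions, and in CPython correctness additionally leans on the GIL rather than on explicit memory ordering.

```python
# Lamport-style single-producer/single-consumer ring buffer: the producer
# writes only `tail`, the consumer writes only `head`, so no lock is needed.

class SPSCQueue:
    def __init__(self, capacity):
        self.buf = [None] * (capacity + 1)   # one slot is kept empty
        self.head = 0                        # advanced only by the consumer
        self.tail = 0                        # advanced only by the producer

    def push(self, item):                    # producer side, non-blocking
        nxt = (self.tail + 1) % len(self.buf)
        if nxt == self.head:
            return False                     # queue full
        self.buf[self.tail] = item
        self.tail = nxt                      # publish the slot after writing it
        return True

    def pop(self):                           # consumer side, non-blocking
        if self.head == self.tail:
            return None                      # queue empty
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        return item
```

In a streaming pipeline, the producer thread spins on `push` and the consumer on `pop`; the one-slot gap distinguishes the full and empty states without any shared counter.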

Galilei, Giacomo A.
Annotations are a recent feature introduced in languages such as Java, C#, and other languages of the .NET family, which allow programmers to attach arbitrary, structured and typed metadata to their code. These languages run on top of so-called virtual execution environments, e.g. the JVM for Java, and the CLR for .NET languages, which allow for the run-time generation of executable code. In this report we explore how annotations and the dynamic code generation capability can be used together to provide programmers with high-level methods for dynamic generation and modification of an application's code at run-time. The report describes the @Java system, the Java libraries it consists of, and the @Java language, which is an extension to Java allowing annotation of arbitrary statements. The strategy we developed consists in parsing a source file written in the @Java language to produce a Java 5 compatible source file. Once compiled, information about annotated statements, such as their bytecode instructions, can be recovered at run-time. We developed two libraries, one that works at low level to do generic bytecode engineering, called JDAsm, and one at high level, called JCodeBrick. Together, these two libraries allow type-safe and totally symbolic runtime code modification and generation without any need to explicitly address bytecode instructions, letting a programmer easily manipulate existing classes or synthesize new ones, by inserting, deleting or extruding pieces of code. After an overview of annotations in Section 2, we address the @Java language and its parser in Section 3. Then a full description of the two libraries for bytecode engineering and code manipulation follows in Sections 4 and 5. We then introduce in Section 6 a few motivating examples of applications of @Java, and provide a more developed case study in Section 7. The last section offers conclusions and ideas for future work. 
The documentation of the two libraries is provided in Appendix A, completing the report. |

Menconi, Giulia and Puliti, Aldamaria and Sbrana, Isabella and Conti, Valerio and Marangoni, Roberto
This paper presents a bottom up strategy to detect features in genomic sequences. |

Romei, Andrea and Turini, Franco
XML is the standard language for representing semi-structured data. With the spreading of XML sources, mining XML data can become an important objective in the near future. This paper presents a project focussed on designing a general-purpose query language in support of mining XML data. In our framework, raw data, mining models and domain knowledge are represented by way of XML documents and stored inside XML native databases. Data mining tasks are expressed in an extension of XQuery. Special attention is given to the frequent pattern discovery problem, and a way of exploiting domain-dependent optimizations and efficient data structures as deeply as possible in the extraction process is presented. We report the results of a first batch of experiments, showing that a good trade-off between expressiveness and efficiency in XML data mining is not a chimera. |

Bonchi, Filippo and Gadducci, Fabio and Monreale, Giacoma Valentina
The paper presents a case study on the synthesis of labelled transition systems (LTSs) for process calculi, choosing as testbed Cardelli and Gordon's Mobile Ambients (MAs). The proposal is based on a graphical encoding: each process is mapped into a graph equipped with suitable interfaces, such that the denotation is fully abstract with respect to the usual structural congruence. Graphs with interfaces are amenable to the synthesis mechanism proposed by Ehrig and Koenig and based on borrowed contexts (BCs), an instance of relative pushouts, introduced by Leifer and Milner. The BC mechanism allows the effective construction of a LTS that has graphs with interfaces as both states and labels, and such that the associated bisimilarity is automatically a congruence. Our paper focuses on the analysis of a LTS over (processes as) graphs with interfaces, as distilled by exploiting the graphical encoding of MAs. In particular, we use the LTS on graphs to recover a suitable LTS directly defined over the structure of MAs processes. |

Degano, Pierpaolo and Gorrieri, Roberto
This note contains the abstracts of the seven posters presented at the 7th Conference on Computational Methods in Systems Biology, CMSB 2009, held in Bologna, August 31st - September 1st, 2009. |

Aldinucci, Marco and Danelutto, Marco and Kilpatrick, Peter
We introduce and address the problem of concurrent autonomic management of different non-functional concerns in parallel applications built as a hierarchical composition of behavioural skeletons. We first define the problems arising when multiple concerns are dealt with by independent managers, then we propose a methodology supporting coordinated management, and finally we discuss how autonomic management of multiple concerns may be implemented in a typical use case. The paper concludes with an outline of the challenges involved in realizing the proposed methodology on distributed target architectures such as clusters and grids. Being based on the behavioural skeleton concept proposed in the CoreGRID GCM, it is anticipated that the methodology will be readily integrated into the current reference implementation of GCM, based on Java ProActive and running on top of major grid middlewares. |

Bellia, Marco and Occhiuto, Maria Eugena
The paper adds a mechanism of {\em closure} to Java. We apply to closures the same technique we exploited in extending Java with methods as parameters \cite{bellia2008,bellia2008c}, and we obtain a formal definition and a prototype of Java with closures. The formal definition consists of a set of source-to-source translation rules that state the meaning of the new construct in terms of compositions of well-known ordinary Java structures. Two variants of the transformation are discussed, to allow recursively defined closures and other mechanisms. The notion of a {\em shared variable}, as a local variable that is allocated in the heap, is also discussed. Eventually, since the resulting transformation is a one-pass process, it can be implemented through a source-to-source, one-pass preprocessor \cite{bellia2007,bellia2008b}, easy to write using standard development tools \cite{levine1995,donnely2006}. |

Del Corso, Gianna M. and Romani, Francesco
In this paper, we first give a quick review of the most used numerical indicators for evaluating research, and then we present an integrated model for ranking scientific publications together with authors and journals. Our model relies on certain adjacency matrices obtained from the relations of citation, authorship and publication. These matrices are first normalized to obtain stochastic matrices and are then combined together by means of weights to form a suitable irreducible stochastic matrix, whose dominant eigenvector provides the ranking. We discuss various strategies for choosing the weights, and we show on large synthetic datasets how the choice of one weighting criterion rather than another can change the behavior of our ranking algorithm. |
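The construction described in this abstract can be sketched as follows: row-normalize the relation matrices, blend them with weights, add a damping term to ensure irreducibility, and take the dominant (Perron) eigenvector of the blend as the ranking. Matrix contents, weights and the damping value below are toy assumptions, not the paper's data or exact model.

```python
# Weighted blend of stochastic matrices ranked by its Perron vector,
# computed by power iteration on the left eigenproblem v P = v.
import numpy as np

def perron_ranking(matrices, weights, damping=0.15, iters=200):
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                          # convex weights
    Ms = [np.asarray(M, dtype=float) for M in matrices]
    Ms = [M / M.sum(axis=1, keepdims=True) for M in Ms]      # row-stochastic
    P = sum(wi * M for wi, M in zip(w, Ms))                  # blended matrix
    n = P.shape[0]
    P = (1 - damping) * P + damping * np.full((n, n), 1 / n) # irreducibility
    v = np.full(n, 1 / n)
    for _ in range(iters):                                   # power iteration
        v = v @ P
        v /= v.sum()
    return v
```

With two toy matrices in which nodes 1 and 2 point to node 0, the stationary vector assigns node 0 the largest score, as expected of a citation-style ranking.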

Corradini, Andrea
PREFACE TERMGRAPH 2009 took place in York (UK) on March 22, 2009, as a one-day satellite event of ETAPS 2009. Previous editions of the TERMGRAPH workshop series took place in Barcelona (2002), in Rome (2004), in Vienna (2006), and in Braga (2007). The advantage of computing with graphs rather than terms (strings or trees) is that common subexpressions can be shared, which improves the efficiency of computations in space and time. Sharing is ubiquitous in implementations of programming languages: many implementations of functional, logic, object-oriented and concurrent calculi are based on term graphs. Term graphs are also used in symbolic computation systems and automated theorem proving. The aim of TERMGRAPH 2009 was to bring together researchers working in these different domains and to foster their interaction, to provide a forum for presenting new ideas and work in progress, and to enable newcomers to learn about current activities in term graph rewriting. Topics of interest for the workshop are all aspects of term graphs and sharing of common subexpressions in rewriting, programming, automated reasoning and symbolic computation. This includes (but is not limited to) term rewriting, graph transformation, programming languages, models of computation, graph-based languages, semantics and implementation of programming languages, compiler construction, pattern recognition, databases, bioinformatics, and system descriptions. This report contains the eight contributions presented during the workshop: they were selected by the Program Committee according to originality, significance, and general interest. In addition to these presentations, the programme included two invited lectures by Helene Kirchner and Fabio Gadducci. |

Felicioli, C. and Marangoni, Roberto
There are several important reasons (biological, evolutionary, clinical, etc.) to give a segment-based description of genomic sequences and, in particular, to detect repeated segments, occurring both as direct and as complemented inverted copies. In some applications, in particular in medical genomics, it is also necessary to count the number of occurrences of a segment. Moreover, by detecting common segments shared by two different sequences it is possible to define a sort of genomic distance between them. Here we propose BpMatch: an algorithm that, working on a suitably modified suffix-tree data structure, achieves all three goals (identifying repeated segments, including their complemented inverted copies, counting the number of repeats, and calculating a genomic distance) in a fast and efficient way. BpMatch is able to identify exact copies (and complemented inverted copies) of a segment. The operator should define a priori the minimum length of a string for it to be considered a segment, and the minimum number of occurrences, so that only segments occurring at least that number of times are considered to be significant. BpMatch is very efficient: we determined the time complexity of calculating the self-covering of a string, in terms of the string length and the alphabet dimension, both in the worst case (assuming the alphabet dimension is a constant) and on average. It is important to note that this estimation includes the time required to complete all three tasks: identifying and localizing copied segments, counting the number of occurrences, and evaluating the sequence coverage. |
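The counting task BpMatch addresses can be illustrated with a naive quadratic sketch: a plain dictionary over all substrings rather than the paper's suffix tree, with the DNA alphabet and all function names as assumptions. Occurrences of each segment are counted together with their complemented inverted (reverse-complement) copies.

```python
# Count segments of length >= min_len together with their reverse complements,
# keeping only those occurring at least min_occ times. Quadratic on purpose:
# this only illustrates what is being counted, not how BpMatch counts it.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(s):
    # complemented inverted copy of a DNA segment
    return s.translate(COMPLEMENT)[::-1]

def segment_counts(seq, min_len, min_occ=2):
    counts = {}
    for length in range(min_len, len(seq) + 1):
        for i in range(len(seq) - length + 1):
            seg = seq[i:i + length]
            key = min(seg, revcomp(seg))     # one canonical key per seg/rev-comp pair
            counts[key] = counts.get(key, 0) + 1
    return {k: c for k, c in counts.items() if c >= min_occ}
```

For example, in "ACGTACGT" the segment "CGTA" counts twice because its reverse complement "TACG" also occurs, while the palindromic "GTAC" occurs only once and is filtered out.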

Battaglia, Giovanni and Cangelosi, Davide and Grossi, Roberto and Pisanti, Nadia
In this paper, we introduce a new notion of motifs, called \emph{masks}, that succinctly represent the repeated patterns of an input sequence $T$ of $n$ symbols drawn from an alphabet $\Sigma$. We show how to build the set of all maximal masks of length~$L$ and quorum~$q$ in $O(2^L n)$ time and space in the worst case. We analytically show that our algorithms perform better than enumerating and checking, in constant time each, all the potential $(|\Sigma|+1)^L$ candidate patterns in~$T$ after a polynomial-time preprocessing of $T$. Our algorithms are also cache-friendly, attaining $O(2^L\, \mathit{sort}(n))$ block transfers, where $\mathit{sort}(n)$ is the cache-oblivious complexity of sorting $n$ items. |

Corradini, Andrea and Gadducci, Fabio
After having joined forces with the International Workshop on Coalgebraic Methods in Computer Science (CMCS) for the Second International Conference on Algebra and Coalgebra in Computer Science (CALCO 2007), the Nineteenth International Workshop on Algebraic Development Techniques (WADT 2008) was held as an individual workshop and in its traditional form in Pisa, Italy, from the 13th to the 16th of June 2008. This report contains the thirty-three abstracts presented during the workshop: they were selected by the Steering Committee on the basis of the submitted abstracts according to originality, significance, and general interest. In addition to the presentations of ongoing research results, the programme included three invited lectures by Egon Boerger (Dipartimento di Informatica, Pisa), Luca Cardelli (Microsoft Research, Cambridge) and Stephen Gilmore (Laboratory for Foundations of Computer Science, Edinburgh). |

Frangioni, Antonio and Gentile, Claudio
The Perspective Relaxation is a general approach for constructing tight approximations to MINLP problems with semicontinuous variables. Two different reformulations have been proposed for solving it, one resulting in a Second-Order Cone Program, the other based on representing the perspective function by (an infinite number of) cutting planes. We compare the two reformulations on two sets of test problems to determine which one is most effective in the context of exact or approximate Branch-and-Cut algorithms. |

Bernasconi, Anna and Ciriani, Valentina and Cordone, Roberto
Fast minimization time, compact area and low delay are important issues in logic circuit design. In order to reconcile these main goals, in this paper we propose a new four-level logic form ({\em Generalized EXOR-Projected Sum of Products}, in short {\em GEP-SOP}) with low delay and compact area. Even if the problem of finding an optimal GEP-SOP is computationally hard, we propose an efficient approximation algorithm that gives guaranteed near-optimal solutions. A wide set of experimental results confirms that GEP-SOP forms are often more compact than SOP forms, and their synthesis is always a very fast reoptimization phase after SOP minimization. |

Bartoletti, Massimo and Degano, Pierpaolo and Ferrari, Gianluigi and Zunino, Roberto
We introduce weak binders, a lightweight construct to deal with fresh names in nominal calculi. Weak binders do not define the scope of names as precisely as the standard nu-binders, yet they enjoy strong semantic properties. We provide them with a denotational semantics, an equational theory, and a trace inclusion preorder. Furthermore, we present a trace-preserving mapping between weak binders and nu-binders. |

Cioni, Lorenzo
The present Technical Report contains a few notes about the roles of System Dynamics in the solution of environmental problems. It is not an introduction to System Dynamics; more about the content can be found in the abstract. Reports of errors and inaccuracies are gratefully appreciated. |

Chessa, Stefano and Albano, Michele
In-network storage of data in wireless sensor networks (such as DCS-GHT, for instance) is considered a viable alternative to external storage, since it contributes to reducing the communications inside the network and favours data aggregation. Current approaches exploit pure data replication to assure data availability. In this paper, we consider the use of codes and data dispersal in combination with in-network storage. In particular, given an abstract model of in-network storage, we show how codes can be employed, and we discuss how this can be achieved in three case studies. Since the configuration of the network is particularly critical with respect to correct data encoding, we define a framework aimed at evaluating the probability of correct data encoding and decoding. Then we exploit this result and simulations to show how, in the case studies, the parameters of the codes and the network should be configured in order to achieve correct data coding and decoding with high probability. |

Bini, Dario A. and Del Corso, Gianna M. and Romani, Francesco
An integrated model for ranking scientific publications together with authors and journals, recently presented in [Bini, Del Corso, Romani, ETNA 2008], is closely analyzed. The model, which relies on certain adjacency matrices $H,K$ and $F$ obtained from the relations of citation, authorship and publication, provides the ranking by means of the Perron vector of a stochastic matrix obtained by combining $H,K$ and $F$. Some perturbation theorems concerning the Perron vector previously introduced by the authors are extended to more general cases, and a counterexample to a property previously addressed by the authors is presented. The theoretical results confirm the consistency and effectiveness of our model. Some paradigmatic examples are reported together with some results obtained on a real set of data. |

Ciancia, Vincenzo and Ferrari, Gianluigi and Guanciale, Roberto and Strollo, Daniele
An important issue of the service-oriented approach is the possibility to aggregate, through programmable coordination patterns, the activities involved in service interactions. Two different approaches can be adopted to tackle service coordination: orchestration and choreography. In this paper, we introduce a formal methodology aimed at handling coordination among services from the perspective of a global observer, in the spirit of choreography models. In particular, we address the problem of verifying compliance and consistency between the design of service interactions and the choreography constraints. |

Cioni, Lorenzo
The present Technical Report contains two papers that have been accepted at the S.I.N.G. 4 Conference, Wrocław, Poland, 26-28 June 2008. Both papers have also been presented in seminars at the Computer Science Department, the former on June 3 2008 and the latter on January 23 2008. The first paper presents the use of auction mechanisms for the allocation of chores instead of goods. The second paper presents and analyzes a small set of models that can be used by two players for the barter of goods without the intervention of any numerary good. |

Bruni, Roberto and Mezzina, Leonardo Gaetano
The notion of a session is fundamental in service oriented applications, as it serves to separate interactions between different instances of the same services, and to group together logical units of work. Recently, SCC has been proposed as a calculus centered around the concept of a dyadic session, where service interaction protocols and service orchestration can be conveniently expressed. In this paper we propose a generic type system to collect services' behaviors and then we fix a class of well typed processes that are guaranteed to be deadlock free. The type system is based on previous research on traditional mobile calculi, here conveniently extended and simplified thanks to the neat discipline imposed by the linguistic primitives of SCC. |

Zunino, Roberto and Bartoletti, Massimo and Degano, Pierpaolo and Ferrari, Gianluigi
We propose a model for specifying, analysing and enforcing safe usage of resources. Our usage policies allow for parametricity over resources, and they can be enforced through finite state automata. The patterns of resource access and creation are described through a basic calculus of usages. In spite of the augmented flexibility given by resource creation and by policy parametrization, we devise an efficient (polynomial-time) model-checking technique for deciding when a usage is resource-safe, i.e. when it complies with all the relevant usage policies. |

Bartoletti, Massimo and Zunino, Roberto
We introduce LocUsT, a tool to statically check whether a given resource usage complies with a local policy. LocUsT takes as input an abstraction of the behaviour of a program, called a usage. Usages are expressed in a simple process calculus, and over-approximate all the resource accesses of the program itself. As additional input, LocUsT takes a policy that defines the allowed resource access patterns, represented through a finite state automaton parametrized over resources. Finally, LocUsT decides whether some trace of the given usage violates some instantiation of the policy. |

Aldinucci, Marco and Tuosto, Emilio
Autonomic management can improve the QoS provided by parallel/distributed applications. Within the CoreGRID Component Model, autonomic management is tailored to the automatic, monitoring-driven alteration of the component assembly and, therefore, is defined as the effect of (distributed) management code. This work yields a semantics based on hypergraph rewriting suitable for modelling the dynamic evolution and non-functional aspects of Service Oriented Architectures and component-based autonomic applications. In this regard, our main goal is to provide a formal description of adaptation operations that are typically only informally specified. We contend that our approach makes it easier to raise the level of abstraction of management code in autonomic and adaptive applications. |

Bonchi, Filippo and Brogi, Antonio and Corfini, Sara and Gadducci, Fabio
Web services represent a promising technology for the development of distributed heterogeneous software systems. In this setting, a major issue is to establish whether two services can be used interchangeably in any context. This paper illustrates - through a concrete scenario from banking systems - how a suitable notion of behavioural equivalence over Petri nets can be effectively employed for checking the correctness of service specifications and the replaceability of (sub)services. |

Bini, Dario A. and Del Corso, Gianna M. and Romani, Francesco
Some integrated models for ranking scientific publications together with authors and journals are presented and analyzed. The models rely on certain adjacency matrices obtained from the relations of citation, authorship and publication, which concur to form a suitable irreducible stochastic matrix whose Perron vector provides the ranking. Some perturbation theorems concerning the Perron vector of nonnegative irreducible matrices are proved. These theoretical results provide a validation of the consistency and effectiveness of our models. Several paradigmatic examples are reported together with some results obtained on a real set of data. |

Bodei, Chiara and Gao, Han and Degano, Pierpaolo
A simple type confusion attack occurs in a security protocol when a principal interprets data of one type as data of another. These attacks can be successfully prevented by tagging the type of each field of a message. Complex type confusions occur instead when tags can be confused with data, and when fields or sub-segments of fields may be confused with concatenations of fields of other types. Capturing these kinds of confusions is not easy in a process calculus setting, where it is generally assumed that messages are correctly interpreted. In this paper, we model in the process calculus LySa only the misinterpretation due to the confusion of a concatenation of fields with a single field, by extending the notion of one-to-one variable binding to multi-to-one binding. We further present a formal way of detecting these possible misinterpretations, based on a Control Flow Analysis for this version of the calculus. The analysis over-approximates all the possible behaviours of a protocol, including those affected by these type confusions. As an example, we consider the amended Needham-Schroeder symmetric protocol, where we succeed in detecting the type confusion that leads to the complex type flaw attack it is subject to. Therefore, the analysis can capture potential type confusions of this kind in security protocols, besides other security properties such as confidentiality, freshness and message authentication. |

Frangioni, Antonio and Zhang, Qinghua
The paper discusses a general framework for outer approximation type algorithms for the canonical DC optimization problem. The algorithms rely on a polar reformulation of the problem and exploit an approximated oracle in order to check global optimality. Consequently, approximate optimality conditions are introduced and bounds on the quality of the approximate global optimal solution are obtained. A thorough analysis of properties which guarantee convergence is carried out; two families of conditions are introduced, which lead to the design of six implementable algorithms whose convergence can be proved within a unified framework. |

Frangioni, Antonio and d'Antonio, Giacomo
Subgradient methods for constrained nondifferentiable problems benefit from projecting the search direction onto (the normal cone of) the feasible set prior to computing the steplength, that is, from the use of conditional subgradient techniques. In general, subgradient methods also largely benefit from deflection, i.e., defining the search direction as a (convex) combination of the previous direction and the current subgradient. However, combining the two techniques is not straightforward, especially if an inexact oracle is available which can only compute approximate function values and subgradients. We present a convergence analysis of several different variants, both conceptual and implementable, of approximate conditional deflected subgradient methods. Our analysis extends the available results in the literature by using the main stepsize rules presented so far while allowing deflection in a more flexible way; to allow for (diminishing/square summable) rules where the stepsize is tightly controlled a priori, we propose a new class of deflection-restricted approaches where it is the deflection parameter, rather than the stepsize, which is dynamically adjusted using the "target value" of the optimization sequence. For both Polyak-type and diminishing/square summable stepsizes, we propose a "correction" of the standard formula which shows that, in the inexact case, knowledge about the error committed by the oracle (which is available in several practical applications) can be exploited in order to strengthen the convergence properties of the method. |
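A textbook instance of deflection with a Polyak-type stepsize can be sketched as follows; this is a hedged illustration, not the paper's conditional or inexact-oracle variants, and the function and parameter names are assumptions.

```python
# Deflected subgradient method with a Polyak-type stepsize: the search
# direction is a convex combination of the previous direction and the
# current subgradient, and the stepsize uses a known target value f_star.
import numpy as np

def deflected_subgradient(f, subgrad, x0, f_star, alpha=0.7, iters=500):
    x = np.asarray(x0, dtype=float)
    d = np.zeros_like(x)
    best = f(x)
    for _ in range(iters):
        g = subgrad(x)
        d = alpha * g + (1 - alpha) * d             # deflected direction
        step = (f(x) - f_star) / max(d @ d, 1e-12)  # Polyak-type stepsize
        x = x - step * d
        best = min(best, f(x))                      # best value seen so far
    return best
```

On the nondifferentiable test function f(x) = ||x||_1 with f_star = 0, the best value found converges to the optimum despite the oscillation typical of subgradient iterates.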

Bodei, Chiara and Dung, Dinh and Ferrari, Gian Luigi
We outline the design of a framework for modelling cloud computing systems. The approach is based on a declarative programming model which takes the form of a lambda-calculus enriched with suitable mechanisms to express and enforce application-level security policies governing usage of the resources available in the clouds. We focus on the server side of cloud systems, adopting a pro-active approach where explicit security policies regulate the server's behaviour. |

Cioni, Lorenzo
The present Technical Report contains a short analysis of both the Borda and the Condorcet methods. We start with a description of both methods in their twofold role: as methods for selecting a winning alternative from a set of available alternatives, and as possible outranking methods over a set of alternatives. We then discuss the properties of each method and analyze them with regard to the conditions posed by Arrow's impossibility theorem. The last step, which represents the core of the Technical Report, is the search for possible relations between the two methods. The Technical Report closes with the proposal of a composed outranking method based on both the Borda and the Condorcet methods executed on the same profiles of preferences. As usual, reports of errors and inaccuracies are gratefully appreciated. |
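The two methods analysed in the report can be illustrated with a short, self-contained sketch (the toy preference profiles below are hypothetical, not taken from the report; a profile is simply one ranked list of alternatives per voter):

```python
def borda_winner(profiles):
    """Borda count: with m alternatives, each voter's ranking awards
    m-1, m-2, ..., 0 points from top to bottom."""
    m = len(profiles[0])
    scores = {a: 0 for a in profiles[0]}
    for ranking in profiles:
        for pos, alt in enumerate(ranking):
            scores[alt] += m - 1 - pos
    return max(scores, key=scores.get), scores

def condorcet_winner(profiles):
    """Condorcet winner: the alternative (if any) that beats every
    other one in a pairwise majority contest."""
    alts = profiles[0]
    def beats(a, b):
        pref = sum(1 for r in profiles if r.index(a) < r.index(b))
        return pref > len(profiles) / 2
    for a in alts:
        if all(beats(a, b) for b in alts if b != a):
            return a
    return None  # a Condorcet winner need not exist (Condorcet paradox)

# five voters ranking three alternatives A, B, C
profiles = [list("ABC"), list("ABC"), list("BCA"), list("CBA"), list("BAC")]
```

On this profile both methods agree on B; on a cyclic profile such as `[ABC, BCA, CAB]` the Condorcet method returns no winner while Borda still ranks the alternatives, which is one of the divergences the report explores.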

Cioni, Lorenzo
In this Technical Report we describe a type of auction mechanism where the auctioneer A wants to auction an item among a certain number of bidders from a set B, who submit bids with the aim of not getting that item. Owing to this feature we call this mechanism an "inverse" or "negative" auction. The main motivation for this mechanism is that both the bidders and the auctioneer give a negative value to the auctioned item (and so they see it as a bad rather than a good). The mechanism is presented in its basic simple version and with some possible extensions that account for the payment of a fee for not attending the auction, the interactions among the bidders, and the presence of other supporting actors. |

Aldinucci, Marco and Bracciali, Andrea and Liò, Pietro and Sorathiya, Anil and Torquati, Massimo
The stochastic modelling of biological systems is an informative and, in some cases, very adequate technique, which may however turn out to be more expensive than other modelling approaches, such as differential equations. We present StochKit-FF, a parallel version of StochKit, a reference toolkit for stochastic simulations. StochKit-FF is based on the FastFlow programming toolkit for multicores and exploits the novel concept of selective memory. We experiment with StochKit-FF on a model of HIV infection dynamics, with the aim of extracting information from efficiently run experiments, here in terms of average and variance and, in the longer term, of more structured data. |

Cisternino, Antonio and Ferragina, Paolo and Morelli, Davide and Coppola, Massimo
It is common experience to upgrade the firmware of a mobile device and obtain longer battery life, living proof of how software affects the power consumption of a device. Despite this empirical observation, there is a lack of models and methodologies correlating computations with power consumption [3-5]. In this paper we propose an experimental approach to computational complexity and a methodology for conducting measures whose results are independent of the underlying system running the algorithm/software to be tested. Early experimental results are presented and discussed, showing that our methodology is robust and can be used in many settings. We also introduce the foundations of a theory of experimental algorithm complexity, which mimics what is predicted by the classic theory of computational complexity (big-O or Theta notations), except for some notable exceptions that we highlight and comment upon. This theory is validated in many scenarios, by considering several architectures and algorithms. Because of the relation between time complexity and energy consumption, we may suggest that our work measures the “information work”: namely, the energy required for performing information processing. |

Bernasconi, Anna and Ciriani, Valentina and Luccio, Fabrizio and Pagli, Linda
We give a new heuristic for Disjoint Sum-of-Products (DSOP) minimization of a Boolean function $f$, based on a new criterion for product selection. Starting from a Sum-of-Products (SOP) $S$ of $f$, i.e., a set of cubes covering the minterms of $f$, we assign a weight $w(p)$ to each product (cube) $p$ in $S$, where $w(p)$ depends on the intersection of $p$ with the other cubes of $S$. We assign higher weights to the cubes that, if selected in a DSOP, would generate a higher fragmentation of the other cubes. Disjoint cubes are then selected for increasing value of $w$ and decreasing size, recomputing the SOP on the residual function at different possible stages as a trade-off between quality of result and computational time. We also propose a heuristic for computing partial DSOP forms, i.e., SOP forms whose cubes cover exactly once a given subset of minterms of the function, and more than once the remaining minterms. A set of experiments conducted on major benchmark functions shows that our method, with all its variants, always generates better results than those of previous heuristics, including the one based on a BDD representation of $f$. These experiments have also outlined how re-synthesizing the residual function in SOP form seems to be crucial for obtaining compact DSOPs. |
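The product-selection idea can be sketched over an explicit minterm representation. This is a deliberately simplified proxy: here $w(p)$ merely counts how many other cubes $p$ intersects, rather than the paper's exact fragmentation-based weight, and the residual function is crudely covered by singleton cubes instead of being re-synthesized in SOP form:

```python
def dsop_greedy(cubes):
    """Toy DSOP selection: each cube is a frozenset of minterms.
    w(p) = number of other cubes that p intersects (a simplified
    stand-in for the fragmentation-based weight of the paper)."""
    w = {p: sum(1 for q in cubes if q is not p and p & q) for p in cubes}
    # scan cubes by increasing weight, then decreasing size
    order = sorted(cubes, key=lambda p: (w[p], -len(p)))
    chosen, covered = [], set()
    for p in order:
        if not (p & covered):           # keep only pairwise-disjoint cubes
            chosen.append(p)
            covered |= p
    # cover leftover minterms with singleton cubes (crude residual step)
    leftover = set().union(*cubes) - covered
    chosen += [frozenset({m}) for m in sorted(leftover)]
    return chosen
```

For instance, with cubes {0,1}, {1,2,3} and {3,4}, the middle cube gets the highest weight (it intersects both others), so the two outer cubes are selected first and only minterm 2 must be patched with a singleton.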

Bigi, Giancarlo and Capata, Adela and Kassay, Gabor
New existence results for the strong vector equilibrium problem are presented, relying on a well-known separation theorem in infinite-dimensional spaces. The main results are applied to strong cone saddle-points and strong vector variational inequalities, providing new existence results; furthermore, they allow us to recover an earlier result from the literature. |

Dittamo, Cristian and Gervasi, Vincenzo and Boerger, Egon and Cisternino, Antonio
Web applications (i.e., applications whose interface is presented to the user via a web browser, whose state is split between a server and a client, and where the only interaction between server and client is through the HTTP protocol) have become more and more widespread, and are integrated into most users' everyday work habits. The glue linking the disparate technologies involved, from dynamic HTML to XML RPC, is the Javascript language. Yet, no formal definition of its semantics exists, which in turn makes it impossible to formally prove correctness, liveness and security properties of Web applications. As a first step towards improving this situation, we provide a formal semantics for ECMAScript (standard ECMA-262) by means of an Abstract State Machines (ASMs) specification. We follow the path established by other specification efforts for similar languages (e.g. Java and C#), but in addition we establish a formal trace between parts of our specification and the ECMA standard, thus facilitating the proof of correctness of the specification. More specifically, we define the dynamic semantics of ECMAScript by providing (in terms of ASMs) an interpreter which executes ECMAScript programs. We use an algebra to represent the state of the ECMAScript program, the state of the ECMAScript interpreter, and the state of the host environment (typically, the browser). We assume the necessary syntactic information (namely, the annotated Abstract Syntax Tree (AST) of the given program) to be available. Using ASMs we detail a visit of the AST whose effect is to simulate the execution of the program. In the ECMA standard the behavior of ECMAScript constructs is operationally described using so-called abstract operations (AOs), given as numbered lists of steps. Therefore our interpreter is composed of two submachines: the ECMA-Script and ECMA-AO interpreters. The former invokes the ECMA-AO one to evaluate nodes, simulating the AOs used in the ECMA standard to describe the semantics of each production. We organize the ECMA-AO interpreter rules in such a way that each rule application corresponds to an AO instruction in the ECMA standard. This helps to check the correctness of the ASM model with respect to the ECMA standard, as well as the correctness of implementations with respect to the ASM model. Therefore, the state of our interpreter can be mapped via abstraction/refinement functions onto the concrete state of any conformant implementation. |

Gallicchio, Claudio and Micheli, Alessio and Barsocchi, Paolo and Chessa, Stefano
Real-time indoor user localization, although limited to the current user position, is of great practical importance in many Ambient Assisted Living (AAL) applications. Moreover, an accurate prediction of the user's next position (even a short time in advance) may open up a number of new AAL applications that could timely provide the right services in the right place even before the user requests them. However, the problem of forecasting the user's position is complicated by the intrinsic difficulty of localization in indoor environments, and by the fact that different paths of the user may intersect at a given point yet end in different places. We tackle this problem by modeling the localization information stream obtained from a Wireless Sensor Network (WSN) using Recurrent Neural Networks implemented as efficient Echo State Networks (ESNs), within the Reservoir Computing paradigm. In particular, we have set up an experimental test-bed in which the WSN produces localization information for a user who moves along a number of different paths, and in which the ESN collects localization information to predict a future position of the user at some given mark points. Our results show that, with an appropriate configuration of the ESN, the system reaches a good prediction accuracy even with a small WSN, and that the accuracy scales well with the WSN size. Furthermore, the accuracy is also reasonably robust to variations in the deployment of the sensors. For these reasons our solution can be configured to meet the desired trade-off between cost and accuracy. |

Luong, Binh Thanh and Ruggieri, Salvatore and Turini, Franco
With the support of the legally-grounded methodology of situation testing, we tackle the problems of discrimination discovery and prevention from a dataset of historical decisions by adopting a variant of k-NN classification. A tuple is labeled as discriminated if we can observe a significant difference of treatment among its neighbors belonging to a protected-by-law group and its neighbors not belonging to it. Discrimination discovery boils down to extracting a classification model from the labeled tuples. Discrimination prevention is tackled by changing the decision value for tuples labeled as discriminated before training a classifier. The approach of this paper overcomes legal weaknesses and technical limitations of existing proposals. |
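The labeling step can be sketched as follows. This is an illustrative toy (names, distance and threshold are ours, and the paper's legally-grounded details are abstracted away): a protected tuple with a negative decision is flagged when the rate of negative decisions among its k nearest protected neighbours exceeds that among its k nearest unprotected neighbours by more than a threshold t:

```python
def situation_test(tuples, idx, k=3, t=0.3):
    """Toy k-NN situation testing. Each tuple is (features, protected, decision),
    with decision 1 = negative outcome (e.g. benefit denied)."""
    x, prot, dec = tuples[idx]
    if not prot or dec != 1:
        return False                     # only protected, negatively-decided tuples

    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

    def knn_rate(group):
        # rate of negative decisions among the k nearest neighbours in `group`
        nbrs = sorted(group, key=lambda s: dist(s[0], x))[:k]
        return sum(s[2] for s in nbrs) / len(nbrs)

    same = [s for i, s in enumerate(tuples) if i != idx and s[1]]
    other = [s for s in tuples if not s[1]]
    return knn_rate(same) - knn_rate(other) > t
```

Flagged tuples would then feed the discovery step (training a classifier on the labels) and the prevention step (flipping their decision value before training).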

Cioni, Lorenzo
Candle auctions have been used in the past as a variant of the English auction with a random termination time, associated either with the going out of a candle or with the falling of a needle inserted at a random position in a burning candle. In this case such auctions are used by the auctioneer A for the allocation of a good to one of the n bidders b_i of the set B. Our basic motivation for the use of this type of auction is the following. We are planning to use such auctions for the allocation of a chore to one b_i from the set B, whose members have been selected by A using a set of private criteria that do not depend on the willingness of the single bidders to attend. The bidder to be selected has to be chosen from the set B given that the available information about these bidders is imprecise or fuzzy. These features prevent the profitable and direct selection of a suitable bidder with the guarantee of choosing the best one. For this reason we plan to adopt an auction mechanism where the bidders pay for not getting the chore, but one of them has to get it, though he also receives a compensation for being the winning bidder. The compensation to the winning bidder comes to him from the other bidders, the so-called losing bidders, and is accumulated during the various steps or rounds on which the auction is based. |

Torquati, Massimo
Using efficient point-to-point communication channels is critical for implementing fine-grained parallel programs on modern shared-cache multi-core architectures. This report discusses in detail several implementations of wait-free Single-Producer/Single-Consumer (SPSC) queues, and presents a novel and efficient algorithm for the implementation of an unbounded wait-free SPSC queue (uSPSC). The correctness proof of the new algorithm, and several performance measurements based on simple synthetic benchmarks and microbenchmarks, are also discussed. |
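The single-producer/single-consumer setting is what makes wait-freedom cheap: only the producer advances the tail index and only the consumer advances the head, so no locks are needed. A minimal bounded sketch in the style of the classic Lamport queue (illustrative; not the unbounded uSPSC algorithm of the report) is:

```python
class SPSCQueue:
    """Bounded single-producer/single-consumer ring buffer.
    Only the producer writes `tail` and only the consumer writes `head`,
    so each side proceeds without locking as long as index updates are
    atomic. (Sketch of the classic Lamport queue, not FastFlow's uSPSC.)"""
    def __init__(self, capacity):
        self.buf = [None] * (capacity + 1)  # one slot wasted to tell full from empty
        self.head = 0   # next slot to read  (consumer-owned)
        self.tail = 0   # next slot to write (producer-owned)

    def push(self, item):                   # producer side only
        nxt = (self.tail + 1) % len(self.buf)
        if nxt == self.head:
            return False                    # queue full
        self.buf[self.tail] = item
        self.tail = nxt
        return True

    def pop(self):                          # consumer side only
        if self.head == self.tail:
            return None                     # queue empty
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        return item
```

The unbounded uSPSC queue of the report chains such bounded buffers, allocating a new one when the current buffer fills; the subtlety addressed by its correctness proof is the handoff between buffers.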

Aldinucci, Marco and Ruggieri, Salvatore and Torquati, Massimo
The whole computer hardware industry has embraced multicores. For these machines, the extreme optimisation of sequential algorithms is no longer sufficient to squeeze out the real machine power, which can only be exploited via thread-level parallelism. Decision tree algorithms exhibit natural concurrency that makes them suitable to be parallelised. This paper presents an approach for an easy-yet-efficient porting of an implementation of the C4.5 algorithm to multicores. The approach is based on the FastFlow parallel programming environment. The strength of our porting consists in minimal changes to the original sequential code. In addition to the tree-building algorithm, we also consider the so far unaddressed problem of parallelising the error-based pruning with grafting algorithm of C4.5. We devise lower bounds for the forms of parallelisation adopted, and achieve performances close to such bounds. |

Bigi, Giancarlo and Passacantando, Mauro
The paper deals with equilibrium problems (EPs) with nonlinear convex constraints. First, EP is reformulated as a global optimization problem by introducing a class of gap functions, in which the feasible set of EP is replaced by a polyhedral approximation. Then, an algorithm is given for solving EP through a descent-type procedure related to exact penalties of the gap functions, and its global convergence is proved. Finally, the algorithm is tested on a network oligopoly problem with nonlinear congestion constraints. |

Frangioni, Antonio and Gorgone, Enrico
We propose a modification to the (generalized) bundle scheme for minimization of a convex nondifferentiable sum-function in the case where some of the components are ''easy'', that is, they are Lagrangian functions of explicitly known convex programs with ''few'' variables and constraints. This happens in many practical cases, particularly within applications to combinatorial optimization. In this case one can insert in the master problem a suitably modified representation of the original convex subproblem, as an alternative to iteratively inner-approximating it by means of the produced extreme points, thus providing the algorithm with exact information about a part of the objective function, possibly leading to performance improvements. This is shown to happen in at least one relevant application: the computation of tight lower bounds for Fixed-Charge Multicommodity Min-Cost Flow problems. |

Buscemi, Maria Grazia and Coppo, Mario and Dezani-Ciancaglini, Mariangiola and Montanari, Ugo
This paper focuses on client-service interactions distinguishing between three phases: negotiate, commit and execute. The participants negotiate their behaviours, and if an agreement is reached they commit and start an execution which is guaranteed to respect the interaction scheme agreed upon. These ideas are materialised through a calculus of contracts enriched with semiring-based constraints, which allow clients to choose services and to interact with them in a safe way. A concrete representation of these constraints with logic programs and logic program combinations is straightforward, thus reducing constraint solution (and consequently the establishment of a contract) to the execution of a logic program. |

Cioni, Lorenzo
This technical report presents a new partial order on a set A of alternatives according to a set of criteria C. The order is based on a binary strict preference relation that is not transitive but satisfies asymmetry. The order corresponds to a directed graph that serves as an aiding tool, allowing the single decision maker to select the best alternative from the set A. |

Aldinucci, Marco and Danelutto, Marco and Torquati, Massimo
FastFlow is a structured parallel programming framework targeting shared-memory multicore architectures. Its layered design, and the optimized implementation of the communication mechanisms used to implement the FastFlow streaming networks provided to the application programmer as algorithmic skeletons, support the development of efficient fine-grain applications. FastFlow is available at http://sourceforge.net/projects/mc-fastflow/. This work introduces FastFlow programming techniques and points out the different ways of parallelizing existing C/C++ code using FastFlow as a software accelerator. In short: this is a kind of tutorial on FastFlow. |

Pagli, Linda and Viglietta, Giovanni
In this paper we study the Near-Gathering problem for a set of asynchronous, anonymous, oblivious and autonomous mobile robots with limited visibility moving in Look-Compute-Move (LCM) cycles: in this problem, the robots have to get close enough to each other, so that every robot can see all the others, without touching (i.e., colliding with) any other robot. The importance of this problem might not be clear at first sight: by solving the Near-Gathering problem, it is possible to overcome the limitations of having robots with limited visibility, and it is therefore possible to exploit all the studies (the majority, actually) done on this topic in the unlimited visibility setting. In fact, after the robots get close enough, they are able to see all the robots in the system, a scenario similar to the one where the robots have unlimited visibility. Here we present a collision-free algorithm for the Near-Gathering problem, the first to our knowledge, that allows a set of autonomous mobile robots to nearly gather within finite time. The collision-free feature of our solution is crucial in order to combine it with an unlimited visibility protocol: in fact, the majority of the algorithms that can be found on the topic assume that all robots occupy distinct positions at the beginning. Hence, only by providing a collision-free Near-Gathering algorithm, such as the one presented here, is it possible to successfully combine it with an unlimited visibility protocol, thus overcoming the natural limitations of the limited visibility scenario. In our model, distances are induced by the infinity norm. A discussion on how to extend our algorithm to models with different distance functions, including the usual Euclidean distance, is also presented. |

Montanari, Ugo and Sammartino, Matteo
Traditional process calculi usually abstract away from network details, modeling only communication over shared channels. They, however, seem inadequate to describe new network architectures, such as Software Defined Networks, where programs are allowed to manipulate the infrastructure. In this paper we present a network-conscious, proper extension of the pi-calculus: we add connector names and the primitives to handle them, and we provide both an interleaving and a concurrent semantics. The extension to connector names is natural and seamless, since they are handled in full analogy with ordinary names. In the interleaving case, observations are the routing paths through which sent and received data are transported, while in the concurrent case we allow multisets of paths to be observed. However, restricted connector names do not appear in the observations, which thus can possibly be as abstract as in the pi-calculus. Finally, for the concurrent semantics we show that bisimilarity is a congruence, a property which also holds for the concurrent version of the pi-calculus. |

Bigi, Giancarlo and Frangioni, Antonio and Zhang, Qinghua
We propose a novel generalization of the canonical DC problem (CDC), and we study the convergence of outer approximation algorithms for its solution which use an approximated oracle for checking the global optimality conditions. Although the approximated optimality conditions are similar to those of CDC, this new class of problems is shown to significantly differ from its special case. Indeed, outer approximation approaches for CDC need to be substantially modified in order to cope with the more general problem, leading to new algorithms. We develop a hierarchy of conditions that guarantee global convergence, and we build three different cutting plane algorithms relying on them. |

Astorino, Annabella and Frangioni, Antonio and Fuduli, Antonio and Gorgone, Enrico
We discuss a numerical algorithm for minimization of a convex nondifferentiable function belonging to the family of proximal bundle methods. Unlike all of its brethren, the approach does not rely on measuring descent of the objective function at the so-called ``serious steps'', while ``null steps'' only serve to improve the descent direction in case of unsuccessful steps. Rather, a merit function is defined which is decreased at each iteration, leading to a (potentially) continuous choice of the stepsize between zero (the null step) and one (the serious step). By avoiding the discrete choice the convergence analysis is simplified, and we can more easily obtain efficiency estimates for the method. Simple choices for the step selection actually reproduce the dichotomic 0/1 behavior of standard proximal bundle methods, but shed new light on the rationale behind the process, and ultimately lead to different rules. Yet, using nonlinear upper models of the function in the step selection process can lead to actual fractional steps. |

Vanneschi, Marco and Mencagli, Gabriele
A central issue for parallel applications executed on heterogeneous distributed platforms (e.g. Grids and Clouds) is ensuring that performance and cost parameters are optimized throughout the execution. A solution is based on providing application components with adaptation strategies able to select at run-time the best component configuration. In this report we introduce preliminary work concerning the exploitation of control-theoretic techniques for controlling the Quality of Service of parallel computations. In particular we demonstrate how the model-based predictive control strategy can be applied on the basis of first-principle performance models of structured parallelism schemes. We also evaluate the viability of our approach in a first experimental scenario. |

Bellia, Marco and Occhiuto, Maria Eugena
\Lname\ is a minimal core calculus that extends Featherweight (generic) Java, \FGJ, with lambda expressions. It has been used to study properties of Simple Closures in Java, including type safety and the abstraction property. Its formalization is based on a reduction semantics and a typing system that extend those of \FGJ. ${\cal F}$ is a source-to-source translation rule system from Java 1.5 extended with lambda expressions back to ordinary Java 1.5. It has been introduced to study implementation features of closures in Java, including assignment of non-local variables and relations with anonymous class objects. In this paper we prove that the two semantics commute. |

Romei, Andrea and Ruggieri, Salvatore
Most of the decisions in today's knowledge society are taken on the basis of historical data, by extracting models, patterns, profiles, and rules of human behavior in support of (automated) decision making. There is thus a need to develop models, methods and technologies for modelling the processes of discrimination analysis, in order to discover and prevent discrimination phenomena. In this respect, discrimination analysis from data should build on the large body of existing legal and economic studies. This paper intends to provide a multi-disciplinary survey of the literature on discrimination data analysis, including methods for data collection, empirical studies, controlled experiments, statistical evidence, and their legal requirements and grounds. We cover the following mainstream research lines: labour economic models, (quasi-)experimental approaches such as auditing and controlled experiments, profiling-based approaches such as racial profiling and credit markets, and the recently blooming research on knowledge discovery approaches. |

Degano, Pierpaolo and Mezzetti, Gianluca and Ferrari, Gianluigi
Two classes of nominal automata, namely Usage Automata (UAs) and Variable Finite Automata (VFAs), are considered to express resource control policies over program execution traces expressed by a nominal calculus (Usages). We first analyse closure properties of UAs, and then show that UAs are less expressive than VFAs. We finally carry over to VFAs the symbolic technique for model checking Usages against UAs, so making it possible to verify the compliance of a program with a larger class of security properties. |

Romei, Andrea and Turini, Franco
XQuake is a language and system for programming data mining processes over native XML databases in the spirit of inductive databases. It extends XQuery to support KDD tasks. This paper focuses on the features required in the definition of the steps of the mining process. The main objective is to show the expressiveness of the language in handling mining operations as an extension of basic XQuery expressions. To this purpose, the paper offers an extended application in the field of analyzing web logs. |

Mencagli, Gabriele and Vanneschi, Marco
In this report we provide the fundamental results for the formal performance modeling of distributed parallel computations described as computation graphs of parallel modules. In our approach parallelism is expressed both inside each module (based on structured parallelism schemes) and between different modules, which can compose general graph structures. Our methodological approach, based on the foundations of Queueing Theory and Queueing Network Theory, provides the necessary tools for predicting the steady-state behavior of a parallel computation. |

Bellia, Marco and Occhiuto, Maria Eugena
The paper contains the technical complements of the paper, by the same authors, published in the book "Java in Academia and Research", iConcept Press Ltd, 2011, that describes the extensions of Java 1.5 with Higher Order (H.O. for short) mechanisms. The present complements contain the complete tables of (i) the translation semantics of a form of closure for Java; (ii) the translation semantics of H.O. methods with mc\_parameters as arguments; (iii) the translation semantics of Java extended with the above mechanisms put together; (iv) theorems, lemmas and proofs of all the properties of the mechanisms and of the semantics discussed in the paper. |

Aldinucci, Marco and Anardu, L and Danelutto, Marco and Kilpatrick, Peter and Torquati, Massimo
Data flow techniques have been around since the early ’70s when they were used in compilers for sequential languages. Shortly after their introduction they were also considered as a possible model for parallel computing, although the impact here was limited. Recently, however, data flow has been identified as a candidate for efficient implementation of various programming models on multi-core architectures. In most cases, however, the burden of determining data flow “macro” instructions is left to the programmer, while the compiler/run time system manages only the efficient scheduling of these instructions. We discuss a structured parallel programming approach supporting automatic compilation of programs to macro data flow and we show experimental results demonstrating the feasibility of the approach and the efficiency of the resulting “object” code on different classes of state-of-the-art multi-core architectures. The experimental results use different base mechanisms to implement the macro data flow run time support, from plain pthreads with condition variables to more modern and effective lock- and fence-free parallel frameworks. Experimental results comparing the efficiency of the proposed approach with that achieved using other, more classical parallel frameworks are also presented. |

Cioni, Lorenzo
In this technical report we present a method that can be used by a set of decision makers in order to rank a certain number of alternatives according to a given set of criteria. The method aims at producing a directed multigraph involving all the alternatives, so that it is possible for the decision makers to identify the worst alternatives and the best alternatives. The worst alternatives are never selected by the decision makers, who perform their final selection among the best alternatives. |

Belazzougui, Djamal and Venturini, Rossano
Given a set of integer keys from a bounded universe along with associated data, the dictionary problem asks to answer two queries: membership and retrieval. Membership has to tell whether a given element is in the dictionary or not; retrieval has to return the data associated with the searched key. This paper studies three well-established relaxations of this basic problem: (compressed) static functions, approximate membership and relative membership. (Compressed) Static functions. In this relaxation, also known as retrieval-only dictionaries, we are given a set $S \subseteq U$ of $n$ integers, each with associated data from an alphabet of size $\sigma$. The problem asks to build a dictionary that, given a key $x\in S$, returns its associated data. Notice that, whenever $x \not \in S$, arbitrary data is returned. This problem has been widely studied in the past. Solutions to this problem have to carefully organize the associated data so that they can be retrieved in constant time without the need of storing the keys in $S$. Compressed static functions move a step forward: not only do they not store the keys, they also achieve space complexities bounded in terms of the entropy $H_0$ of the associated data. Such solutions are very interesting mainly for two reasons. Firstly, since $nH_0$ is at most $n\log \sigma$, these results are always at least as good as the uncompressed ones. Moreover, since the associated data often follow a skewed distribution, $nH_0$ could even become sublinear in $n$. The current best solutions are by Porat and by Hreinsson et al. The first one requires $n\log \sigma +o(n)$ bits of space, while the second one uses $(1+\delta)nH_0+n \cdot \min(p_0+0.086,1.82(1-p_0))$ bits of space, where $p_0$ is the probability of the most frequent symbol and $\delta$ is a constant greater than $0$. 
Thus, the space complexities of these solutions are incomparable: the former has a sublinear overhead but is not compressed, while the latter is suboptimal due to the factor $(1+\delta)$ multiplying $H_0$ and has an overhead that may be $\Theta(n)$ depending on $p_0$. Our optimal scheme achieves the best of both, being the first known solution to obtain simultaneously constant query time, compressed space ($nH_0$), and sublinear overhead ($o(n)$). We strongly believe that these characteristics make the use of static functions significantly more appealing for many applications. Approximate membership. The approximate membership problem has been studied for decades, and the Bloom filter data structure is probably the most popular and widely used technique solving it. With Bloom filters we can represent a set of $n$ integers by using $n\log\frac{1}{\epsilon} \log e$ bits of space with false positive probability ({\sf fpp}) $\epsilon$. Both its space and time complexities are non-optimal: space is a constant factor away from optimal, and query time is logarithmic in $\frac{1}{\epsilon}$. Constant-time approximate membership data structures are able to achieve the optimal $n \log \frac{1}{\epsilon} +o(n)$ bits of space only if $\frac{1}{\epsilon}$ is a power of two. The current best solution requires an $O(n)$-bit overhead in addition to the optimal space, for an arbitrary {\sf fpp}. Our optimal scheme is the first known solution having $o(n)$ overhead for any value of $\epsilon$ such that $\log \frac{1}{\epsilon} = o(\log n / \log\log n)$, namely, any reasonable choice of $\epsilon$ in practice. Relative membership. This problem asks to solve a further relaxation of the membership query: given two sets $S$ and $R$ with $S \subset R$, we must be able to distinguish whether a key belongs to $S$ or to $R\setminus S$. Our compressed static functions are used to obtain a constant-time solution for the problem achieving an optimal space complexity (up to lower-order terms). |
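The quoted Bloom filter space bound $n\log\frac{1}{\epsilon}\log e$ bits can be made concrete with a toy implementation (salted SHA-1 as a stand-in hash family; sizing formulas are the textbook ones, not this paper's schemes):

```python
import hashlib
import math

class BloomFilter:
    """Textbook Bloom filter: n keys in m bits with k hash functions give
    false-positive probability roughly (1 - e^{-kn/m})^k; for optimal k
    this yields the classic m = n * log2(1/eps) * log2(e) bits bound."""
    def __init__(self, n, eps):
        self.m = max(1, math.ceil(n * math.log2(1 / eps) * math.log2(math.e)))
        self.k = max(1, round(math.log(2) * self.m / n))  # optimal hash count
        self.bits = bytearray((self.m + 7) // 8)

    def _positions(self, key):
        # k salted hashes of the key, each reduced modulo m
        for i in range(self.k):
            h = hashlib.sha1(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, key):
        # may err with a false positive, never with a false negative
        return all(self.bits[p // 8] >> (p % 8) & 1 for p in self._positions(key))
```

Note how both non-optimalities discussed above appear directly in the code: the space is a $\log e \approx 1.44$ factor above $n\log\frac{1}{\epsilon}$, and each query evaluates $k \approx \log\frac{1}{\epsilon}$ hash functions rather than running in constant time.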

Bigi, Giancarlo and Castellani, Marco and Pappalardo, Massimo and Passacantando, Mauro
Equilibrium problems provide a mathematical framework which includes optimization, variational inequalities, fixed-point and saddle point problems, and noncooperative games as particular cases. This general format has received increasing interest over the last decade, mainly because many theoretical and algorithmic results developed for one of these models can often be extended to the others through the unifying language provided by this common format. This survey paper aims at covering the main results concerning the existence of equilibria and the solution methods for finding them. |

Bellia, Marco and Occhiuto, Maria Eugena
FGCJ is a minimal core calculus that extends Featherweight Generic Java, FGJ, with lambda expressions for Java Simple Closures. It has been introduced to study, in a reduction semantics framework, properties of Java Simple Closures, including type safety and the abstraction property. F is a source-to-source translation rule system from Java 1.5 extended with lambda expressions back to ordinary Java 1.5. It has been introduced to study, in a translation semantics framework, the design and implementation features of lambda expressions, including simple closures, this transparency, non-local variables and the relations with anonymous class objects. In this paper we prove that the reduction semantics and the translation semantics commute in FGACJ, where FGACJ is a minimal core calculus that extends FGCJ by adding Java interfaces and anonymous class objects, and that allows a restricted definition of the translation semantics F. |

Bigi, Giancarlo and Passacantando, Mauro
This paper deals with equilibrium problems with nonlinear constraints. Exploiting the gap function recently introduced by the authors, which relies on a polyhedral approximation of the feasible region, we propose two descent methods. They are both based on the minimization of a suitable exact penalty function, but they use different rules for updating the penalization parameter and rely on different types of line search. The convergence of both algorithms is proved under standard assumptions. |

Frangioni, Antonio and Furini, Fabio and Gentile, Claudio
The Perspective Reformulation (PR) of a Mixed-Integer NonLinear Program with semi-continuous variables is obtained by replacing each term in the (separable) objective function with its convex envelope. Solving the corresponding Perspective Relaxation requires appropriate techniques. Under some rather restrictive assumptions, the <i>Projected PR</i> can be defined where the integer variables are eliminated by projecting the solution set on the space of the continuous variables only. This approach produces a simple piecewise-convex problem with the same structure as the original one; however, this prevents the use of general-purpose solvers, in that some variables are then only implicitly represented in the formulation. We show how to construct an <i>Approximated</i> Projected PR whereby the projected formulation is "lifted" back to the original variable space, with the integer variables expressing one piece of the obtained piecewise-convex function; in some cases, this produces a reformulation of the original problem with exactly the same size and structure as the standard continuous relaxation but with a substantially improved bound. While the bound can be weaker than that of the PR, this approach can be applied in many more cases and allows direct use of off-the-shelf MIQP software; this is shown to be beneficial in different applications. In the process we also relax some of the other restrictive assumptions of the original development, such as the need for the objective function to be quadratic and the need for the left endpoint of the domain of the variables to be non-negative. |
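To make the construction concrete, here is a minimal single-variable sketch (not the paper's general development): for a semi-continuous variable $y \in \{0\} \cup [l,u]$ with binary indicator $x$ and a convex objective term $f(y) = a y^2 + b y$ (with $a > 0$), the Perspective Reformulation replaces the term with its perspective

```latex
x\, f(y/x) \;=\; \frac{a\, y^2}{x} + b\, y \,,
\qquad l\, x \le y \le u\, x \,, \quad x \in \{0,1\} \,,
```

with the convention that the term is $0$ when $x = 0$. This is the convex envelope of the term over the disjunctive domain, so the continuous relaxation (with $x \in [0,1]$) is tighter than that of the standard formulation; the paper's Projected and Approximated Projected PR are built on top of this basic substitution.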

Conti, Marco and Passarella, Andrea and Marzini , Emanuel and Mascitti, Davide
Opportunistic computing is an exciting evolution of opportunistic networks. Service provision in these networks suffers from the limitations of conventional MANETs, such as unstable connections and the limited distribution of knowledge among nodes. The introduction of service composition poses new challenges for the definition of a system able to recognize the best alternative to choose. The nodes in the network have heterogeneous physical characteristics and different mobility patterns, and execute a variety of different services. While connected, pairs of nodes may collaborate by exchanging information on the network status, such as the characteristics of the encountered nodes and their workload. We propose a system capable of processing such information in order to offer the best alternative for a service request. The selection algorithms evaluate service compositions by considering alternative paths. In this paper we analyze two types of knowledge distribution strategies to study sequential compositions in opportunistic networks. We propose an analytical model, which is validated through a set of simulations verifying its properties. |

Bevilacqua, Roberto and Del Corso, Gianna M.
In this paper we study the structure of almost normal matrices, that is, matrices for which there exists a rank-one matrix $C$ such that $A^HA-AA^H=CA-AC$. Necessary and sufficient conditions for a matrix to belong to the class are given, and a canonical representation as a block tridiagonal matrix is shown. The approach is constructive, and the paper explains how, starting from a $1\times 1$ or $2\times 2$ matrix, we can generate almost normal matrices. Moreover, given an $n\times n$ almost normal matrix, we can compute the block tridiagonal representation with a finite procedure. |
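The defining identity can be checked numerically. The following sketch (the helper name is illustrative, not from the paper) verifies $A^HA - AA^H = CA - AC$ for a given candidate $C$; note that a normal matrix satisfies it trivially with $C = 0$, whereas the paper requires $C$ to be rank-one for the non-degenerate class:

```python
import numpy as np

def satisfies_almost_normal_identity(A, C, tol=1e-10):
    """Check the defining identity A^H A - A A^H = C A - A C
    for a given candidate perturbation matrix C."""
    lhs = A.conj().T @ A - A @ A.conj().T
    rhs = C @ A - A @ C
    return np.linalg.norm(lhs - rhs) <= tol

# A rotation matrix is normal, so the identity holds with C = 0
# (the degenerate case):
A = np.array([[0.0, -1.0], [1.0, 0.0]])
print(satisfies_almost_normal_identity(A, np.zeros((2, 2))))  # True
```

A non-normal matrix such as the Jordan block $\begin{pmatrix}1&1\\0&1\end{pmatrix}$ fails the check with $C = 0$, which is where a suitable rank-one $C$ becomes necessary.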

Gorgone, Enrico
We propose a version of the (generalized) bundle scheme for convex nondifferentiable optimization suitable for the case of a sum-function where some of the components are "easy", that is, they are Lagrangian functions of explicitly known compact convex programs. This corresponds to a <i>stabilized partial Dantzig-Wolfe decomposition</i>, where suitably modified representations of the "easy" convex subproblems are inserted in the master problem as an alternative to iteratively inner-approximating them by extreme points, thus providing the algorithm with exact information about a part of the dual objective function. The resulting master problems are potentially larger and less well-structured than the standard ones, ruling out the available specialized techniques and requiring the use of general-purpose solvers for their solution; this strongly favors piecewise-linear stabilizing terms, as opposed to the more usual quadratic ones. This in turn may have an adverse effect on the convergence speed of the algorithm, so that the overall performance may depend on appropriate tuning of all these aspects. Yet, very good computational results are obtained in at least one relevant application: the computation of tight lower bounds for Fixed-Charge Multicommodity Min-Cost Flow problems. |

Monreale, Anna and Wang, Wendy and Pratesi, Francesca and Rinzivillo, Salvatore and Pedreschi, Dino and Andrienko, Gennady and Andrienko, Natalia
Movement data are sensitive, because people’s whereabouts may allow re-identification of individuals in a de-identified database and thus can potentially reveal intimate personal traits, such as religious or sexual preferences. In this paper, we focus on a distributed setting in which movement data from individual vehicles are collected and aggregated by a centralized station. We propose a novel approach to privacy-preserving analytical processing within such a distributed setting, and tackle the problem of obtaining aggregated traffic information while preventing privacy leakage from data collection and aggregation. We study and analyze three different solutions based on the differential privacy model and on sketching techniques for efficient data compression. Each solution achieves a different trade-off between privacy protection and utility of the transformed data. Using real-life data, we demonstrate the effectiveness of our approaches in terms of data utility preserved by the data transformation, thus bringing empirical evidence to the fact that the “privacy-by-design” paradigm in big data analytics has the potential of delivering high data protection combined with high quality even in massively distributed techno-social systems. |
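As background for the differential privacy model the abstract builds on, here is a minimal sketch of the standard Laplace mechanism applied to an aggregate traffic count (function names and parameters are illustrative, not the paper's actual solutions, which combine this model with sketching):

```python
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale): the difference of two independent
    exponentials with mean `scale` is Laplace-distributed."""
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def dp_count(true_count, epsilon, sensitivity=1.0):
    """Release a count under epsilon-differential privacy via the
    Laplace mechanism: noise scale = sensitivity / epsilon."""
    return true_count + laplace_noise(sensitivity / epsilon)

random.seed(0)
# Each vehicle contributes at most once to a link count (sensitivity 1);
# a smaller epsilon means stronger privacy but a noisier aggregate.
noisy = dp_count(1200, epsilon=0.5)
print(round(noisy))
```

The utility/privacy trade-off the abstract mentions is visible directly in the noise scale $\Delta/\epsilon$: halving $\epsilon$ doubles the expected error on the released aggregate.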

Ruggieri, Salvatore
Parameterized linear systems allow for modelling and reasoning over classes of polyhedra. Collections of squares, rectangles, polytopes, and so on, can readily be defined by means of linear systems with parameters. In this paper, we investigate the problem of learning a parameterized linear system whose class of polyhedra includes a given set of example polyhedral sets and is minimal. |

Serban, Tudor and Coppola, Massimo
We propose a model that uses a small set of quite simple parameters to devise a proper partitioning of the available data-parallel tasks between CPU cores and GPU cores. The model takes into account both hardware- and application-dependent parameters. In particular, it computes the percentage of tasks to be executed on CPU cores and on GPU cores in order to achieve the best performance figures. Experimental results on state-of-the-art CPU/GPU architectures are shown that assess the model properties. |

Galli, Laura and Letchford, Adam N.
<i>Quadratic Convex Reformulation</i> (QCR) is a technique that was originally proposed for 0-1 quadratic programs, and then extended to various other problems. It is used to convert non-convex instances into convex ones, in such a way that the bound obtained by solving the continuous relaxation of the reformulated instance is as strong as possible.<br />In this paper, we focus on the case of <i>0-1 quadratically constrained quadratic programs</i>. The variant of QCR previously proposed for this case involves the addition of a quadratic number of auxiliary continuous variables. We show that, in fact, at most one additional variable is needed. Some computational results are also presented. |
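For readers unfamiliar with the underlying idea, here is the simplest instance of such a convexification (a sketch only: QCR proper chooses the perturbation via semidefinite programming duality to maximize the relaxation bound, rather than using the uniform eigenvalue shift below):

```python
import numpy as np

def convexify_binary_quadratic(Q, c):
    """Uniform eigenvalue shift for min x^T Q x + c^T x over x in {0,1}^n.
    Since x_i^2 = x_i on binary points, replacing Q by Q - lam*I and
    c by c + lam*1 (lam = most negative eigenvalue of Q) leaves the
    objective unchanged on {0,1}^n while making Q positive semidefinite."""
    lam = min(np.linalg.eigvalsh(Q).min(), 0.0)
    n = Q.shape[0]
    return Q - lam * np.eye(n), c + lam * np.ones(n)

Q = np.array([[0.0, 2.0], [2.0, 0.0]])   # indefinite: eigenvalues -2, 2
c = np.array([1.0, 1.0])
Qc, cc = convexify_binary_quadratic(Q, c)
x = np.array([1.0, 1.0])
# The two objectives agree on every binary point:
print(x @ Q @ x + c @ x, x @ Qc @ x + cc @ x)  # 6.0 6.0
```

The continuous relaxation of the shifted problem is convex and hence efficiently solvable; the strength of its bound depends on how the perturbation is chosen, which is precisely what QCR optimizes.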

Bellia, Marco and Occhiuto, Maria Eugena
The last proposal for Java closures, as emerged in JSR 000335, is mainly innovative in: (1) use of nominal types, SAM types, for closures; (2) introduction of target types and compatibility for a contextual typing of closures; (3) need for a type inference that reconstructs the omitted type annotations of closures and closure arguments. The paper provides a sound and complete type system, with nominal types, for such a type inference, and discusses the role and formalization of targeting and compatibility in the designed inference process. |

Frangioni, Antonio and Galli, Laura and Scutellà, Maria Grazia
Real-time traffic with stringent Quality of Service requirements is becoming more and more prevalent in contemporary telecommunication networks. When maximum packet delay has to be considered, optimal delay-constrained routing requires not only choosing a path, but also reserving resources (transmission capacity) along its arcs, as the delay is a nonlinear function of both kinds of components. So far only simple versions of the problem have been considered in the literature, where all arcs are reserved the same capacity (this is referred to as ERA, i.e., Equal Rate Allocations) and have the same capacity reservation cost, because in such a restricted case polynomial time exact algorithms can be devised, whereas the general problem is NP-hard. We first extend the polynomial-time approaches for the ERA version of the problem with unit arc costs by deriving a pseudo-polynomial time algorithm for the integer arc costs case and an FPTAS for the general arc costs case. We then show that, under the main latency models proposed in the literature, the general problem can be formulated as a mixed-integer Second-Order Cone Program (SOCP), and therefore solved with off-the-shelf technology. We compare two formulations: one based on standard big-M constraints, and an improved one where Perspective Reformulation techniques are used to tighten the continuous relaxation. Extensive computational experiments on both real-world networks and randomly-generated realistic ones show that the ERA approach is extremely fast and provides a surprisingly effective heuristic for the general problem whenever it manages to find a solution at all, but it fails for a significant fraction of the instances that the SOCP models can solve. 
We therefore propose a three-pronged approach that combines the fast running time of the ERA algorithm and the effectiveness of the SOCP models, and show that the combined approach is capable of solving realistic-sized instances at different levels of network load in a time compatible with real-time usage in an operating environment. |

Cappanera, Paola and Scutellà, Maria Grazia
The design of efficient Home Care Services is a quite recent and challenging problem, motivated by the ever-increasing age of the population and the consequent need to reduce hospitalization costs. We are given a weekly planning horizon, a set of operators each characterized by a skill, and a set of patients each requiring a set of visits, also characterized by a skill. We propose an integrated model that jointly addresses the following problems: (i) the assignment of operators to patients guaranteeing the appropriate levels of skill; (ii) the scheduling of visits in the planning horizon; and (iii) the determination of a set of tours indicating the sequence of patients each operator must visit on every day of the week. Several variants of this model are investigated. All of them use the pattern as a key tool to formulate the problem. A pattern is an a priori given profile specifying a possible schedule for a given set of visits, possibly characterized by different skills. Computational results on a set of real instances are analysed. They show that the selection of the pattern generation policy is crucial to solve large instances efficiently. |

Nanni, Mirco and Trasarti, Roberto and Monreale, Anna and Grossi, Valerio and Pedreschi, Dino
Customer segmentation is one of the most traditional and valued tasks in customer relationship management (CRM). In this paper, we explore the problem in the context of the car insurance industry, where the mobility behavior of customers plays a key role: different mobility needs, driving habits and skills imply also different requirements (level of coverage provided by the insurance) and risks (of accidents). In the present work, we describe a methodology to extract several indicators describing the driving profile of customers, and provide a clustering-oriented instantiation of the segmentation problem, based on such indicators. Then, we consider the availability of a continuous flow of fresh mobility data sent by the circulating vehicles, aiming at keeping our segments constantly up-to-date. We tackle a major scalability issue that emerges in this context when the number of customers is large, namely the communication bottleneck, by proposing and implementing a sophisticated distributed monitoring solution, which reduces the communications between vehicles and company servers to the essential. Finally, we validate the framework on a large database of real mobility data, coming from GPS devices of private cars. |

Bartoloni, Leonardo and Canciani, Andrea and Cisternino, Antonio and Morelli, Davide
Probability is permeating many applications of computer science, ranging from probabilistic reasoning to stochastic simulations. Therefore, researchers have started working on domain specific languages to target probabilistic computations, in order to support better understanding and development of probabilistic models. Among the proposed approaches, sampling functions is one of the most promising: distributions are described as functional mappings from the unit interval (0,1] to probability domains, which allows expressing a very broad class of distributions. The key advantage of this approach lies in its ability to lift operations on values into operations on the related distributions. The current state-of-the-art frameworks, however, lack the ability to properly express variable correlation in a clean and composable way, which is a major issue in many real-world problems. In this paper we present LiXely, a probabilistic DSL which extends the sampling functions approach by providing explicit means for expressing variable correlation in a composable way, together with its implementation in F#. |
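The sampling functions idea the abstract builds on can be sketched in a few lines (a Python illustration of the general approach, not of LiXely itself, which is an F# DSL; all names below are hypothetical):

```python
import math
import random

# A distribution as a sampling function: a map from the unit interval
# (0, 1] to the probability domain. Here, inverse-CDF sampling of an
# exponential distribution with the given rate.
def exponential(rate):
    return lambda u: -math.log(u) / rate

# Lifting: an operation on values becomes an operation on distributions
# by composing it with their sampling functions.
def lift2(op, d1, d2):
    return lambda u1, u2: op(d1(u1), d2(u2))

random.seed(1)
total_service_time = lift2(lambda a, b: a + b, exponential(2.0), exponential(3.0))
# 1 - random.random() lies in (0, 1], matching the sampling domain.
u1, u2 = 1 - random.random(), 1 - random.random()
print(total_service_time(u1, u2) >= 0)  # True: a sum of exponential draws
```

Passing independent uniforms `u1, u2` yields independent draws; reusing the same `u` for both arguments would correlate them, which is exactly the aspect LiXely makes explicit and composable.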

Calamoneri, Tiziana and Frangioni, Antonio and Sinaimeri, Blerina
A graph $G = (V, E)$ is called a pairwise compatibility graph (PCG) if there exists an edge-weighted tree $T$ and two non-negative real numbers $d_{min}$ and $d_{max}$ such that each leaf $l_u$ of $T$ corresponds to a vertex $u$ of $V$ and there is an edge $(u, v)$ in $E$ if and only if $d_{min} \leq d_{T,w}(l_u, l_v) \leq d_{max}$, where $d_{T,w}(l_u, l_v)$ is the sum of the weights of the edges on the unique path from $l_u$ to $l_v$ in $T$. In this paper, we focus our attention on PCGs for which the witness tree is a caterpillar. We first give some properties of graphs that are PCGs of a caterpillar. We formulate this problem as an integer linear programming problem and we exploit this formulation to show that for the wheels on $n$ vertices $W_n$, $n = 7, \ldots, 11$, the witness tree cannot be a caterpillar. Related to this result, we conjecture that no wheel is PCG of a caterpillar. Finally, we state a more general result proving that any pairwise compatibility graph admits a full binary tree as witness tree $T$. |
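The forward direction of the PCG definition is easy to compute: given a witness tree and the two thresholds, the induced graph follows from all-pairs leaf distances. A minimal sketch (the tree encoding and function names are illustrative, not from the paper):

```python
from itertools import combinations

def leaf_distances(tree, leaves):
    """All-pairs path weights between leaves of an edge-weighted tree,
    given as {node: [(neighbor, weight), ...]}."""
    dist = {}
    for s in leaves:
        seen = {s: 0.0}
        stack = [s]
        while stack:                     # DFS accumulating path weights
            v = stack.pop()
            for w, wt in tree[v]:
                if w not in seen:
                    seen[w] = seen[v] + wt
                    stack.append(w)
        for t in leaves:
            dist[(s, t)] = seen[t]
    return dist

def pcg_edges(tree, leaves, dmin, dmax):
    """Edge set of the PCG induced by (tree, dmin, dmax)."""
    d = leaf_distances(tree, leaves)
    return {(u, v) for u, v in combinations(leaves, 2)
            if dmin <= d[(u, v)] <= dmax}

# A caterpillar with spine s1-s2 and leaves a, b, c (unit weights):
tree = {'s1': [('s2', 1.0), ('a', 1.0), ('b', 1.0)],
        's2': [('s1', 1.0), ('c', 1.0)],
        'a': [('s1', 1.0)], 'b': [('s1', 1.0)], 'c': [('s2', 1.0)]}
print(pcg_edges(tree, ['a', 'b', 'c'], 2.0, 2.0))  # {('a', 'b')}
```

The hard direction, deciding whether a given graph admits such a witness tree at all, is what the paper's integer linear programming formulation attacks for the caterpillar case.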

Cacchiani, Valentina and Galli, Laura and Toth, Paolo
In this tutorial, we give an overview of two fundamental problems arising in the optimization of a railway system: the Train Timetabling Problem (TTP) and the Train Platforming Problem (TPP). These problems correspond to two main phases that are usually optimized in close sequence. First, in the TTP phase, a schedule of the trains in a railway network is determined. A schedule consists of the arrival and departure times of each train at each (visited) station. Second, in the TPP phase, one needs to determine a stopping platform and a routing for each train inside each (visited) station, according to the schedule found in the TTP phase. Due to the complexity of the two problems, an integrated approach is generally hopeless for real-world instances. Hence, the two phases are considered separately and optimized in sequence. Although there exist several versions of both problems, depending on the infrastructure manager's and train operators' requirements, we do not aim at presenting all of them, but rather at introducing the reader to the topic using small examples. We present models and solution approaches for the two problems in a didactic way and always refer the reader to the corresponding articles for technical details. |

Cioni, Lorenzo
In this Technical Report we present an iterative procedure for the evaluation, by a set of decision makers, of a set of projects according to a given set of criteria. The proposed procedure uses Pareto principles and the classical Borda voting method, and aims at attaining fair allocations whenever this is possible. |

Monreale, Anna and Nanni, Mirco and Grossi, Valerio and Trasarti, Roberto and Pedreschi, Dino
In many emerging applications, such as real-time traffic monitoring, financial analysis and sensor network monitoring, an important task is the continuous monitoring of stream data. In these contexts, where large amounts of data arrive continuously, data processing often requires access to valuable personal information. As a consequence, the entire monitoring process could put at risk the privacy of the people represented in the stream data. In this paper, we study the privacy issues arising in distributed systems during the monitoring of threshold functions, where several nodes contribute their data to the monitoring of a specific event. We provide a privacy-preserving framework suitable for finding an acceptable trade-off among privacy protection, data quality and system performance. Using real-life data from GPS devices of private cars, we demonstrate the effectiveness of our approach in a case study consisting of the monitoring of customers' mobility behaviors; in other words, we show how techniques for efficient communication can be used while preserving the individual privacy of the actors participating in the collective analysis. |

Bigi, Giancarlo and Passacantando, Mauro
In recent years many solution methods for equilibrium problems (EPs) have been developed. Several different monotonicity conditions have been exploited to prove convergence. The paper investigates all the relationships between them in the framework of the so-called abstract EP. The analysis is further detailed for variational inequalities and linear equilibrium problems, which include also Nash equilibrium problems with quadratic payoffs. |

Buono, Daniele and De Matteis, Tiziano and Mencagli, Gabriele and Vanneschi, Marco
This paper deals with high-performance Parallel Data Stream Processing methodologies and implementation techniques. Data Stream Processing (DaSP) is intended here in its most general and interesting sense: on-line (often real-time) applications working on multiple, nondeterministic streams, with unlimited or unknown length and highly variable arrival rate, whose elements must be processed efficiently “on the fly”. Traditional high-performance solutions are not sufficient to meet the critical requirements of high throughput and low latency with acceptable memory size: typical DaSP applications require quite novel parallelism models, as well as related design and implementation techniques on the emerging highly parallel architectures. The aim of this paper is to give an original contribution to the design and implementation of parallel DaSP applications. The contribution is twofold: (1) the definition of an approach to a new general model for data-parallel DaSP computations according to a paradigm called Data Stream Parallelism, (2) the application of this approach to the parallel Stream Join problem, showing that the most interesting parallelizations in the literature are particular cases of our approach and that, compared to them, better throughput and latency are achieved by our implementation on multicore architectures. |

Cioni, Lorenzo
In this technical report we present a multicriteria method for a plurality of decision makers. The method is designed as a decision-aiding tool that allows the decision makers to rank the alternatives of a given set A according to the criteria of a set C, so as to define the set of the best alternatives. |

Bigi, Giancarlo and Passacantando, Mauro
A new algorithm for solving equilibrium problems with differentiable bifunctions is provided. The algorithm is based on descent directions of a suitable family of D-gap functions. Its convergence is proved under assumptions which do not guarantee the equivalence between the stationary points of the D-gap functions and the solutions of the equilibrium problem. Moreover, the algorithm does not require setting parameters according to thresholds that depend on regularity properties of the equilibrium bifunction. Finally, the results of preliminary numerical tests on Nash equilibrium problems with quadratic payoffs are reported. |

Tahanan, Milad and van Ackooij, Wim and Frangioni, Antonio and Lacalandra, Fabrizio
The Unit Commitment problem in energy management aims at finding the optimal production schedule of a set of generation units while meeting various system-wide constraints. It has always been a large-scale, non-convex, difficult problem, especially in view of the fact that operational requirements imply that it has to be solved in an “unreasonably” small time; recently, the ever increasing capacity for renewable generation has strongly increased the level of uncertainty in the system, making the (ideal) Unit Commitment model a large-scale, non-convex, (stochastic, robust, chance-constrained) program. We provide a survey of the literature on methods for the Uncertain Unit Commitment problem, in all its variants. We start with a review of the main contributions on solution methods for the deterministic versions of the problem, focussing on those based on mathematical programming techniques that are more relevant for the uncertain versions of the problem. We then present and categorize the approaches to the latter, also providing entry points to the relevant literature on optimization under uncertainty. |

**NOTE.** Starting January 2015, technical reports can be inserted in the Open Access repository UnipiEprints.

Once published, they will also be visible in these pages. The old TR service is no longer maintained.

Data imported from UnipiEprints