Seminars will take place in Room 4B96, Cottrell Building, University of Stirling, normally from 15.00 to 16.00 on Friday afternoons during semester time, unless otherwise stated. For directions to the University, please see the following routes.
|Dr Una-May O'Reilly
Principal Research Scientist
AnyScale Learning For All (ALFA)
MIT Computer Science and
Artificial Intelligence Laboratory, US
|STEALTH: Modeling Coevolutionary Dynamics of Tax Evasion and Auditing. STEALTH is an AI system that detects tax law non-compliance by modeling the co-evolution of tax evasion schemes and their discovery through tax department audits. Tax evasion accounts for billions in lost revenue each year. When the tax department pursues a tax evasion scheme and changes the tax law or audit procedures, the evasion schemes evolve and change into undetectable forms. This arms race between tax evasion schemes and tax authority actions presents a significant challenge in guiding and planning enforcement efforts.
|Dr Oana Andrei
School of Computing Science
University of Glasgow
|Probabilistic Formal Analysis of App Usage to Inform Redesign. Good design of mobile apps is challenging because users are seldom homogeneous or predictable in the ways they navigate around and use the functionality presented to them. Recently we set out a process of app analysis intended to support understanding of use but also redesign using probabilistic model checking. In this talk I will show how to infer admixture models of activity patterns from various time cuts of app usage logs, characterise the activity patterns by probabilistic temporal logic properties using model checking, and compare the admixture models longitudinally and structurally. I will illustrate this work via a case study of a mobile app presenting analytic findings and discussing how they are feeding into redesign. We had posited that two activity patterns indicated two separable sets of users, each of which might benefit from a differently tailored app version, but our subsequent analysis detailed users’ interleaving of activity patterns over time – evidence speaking more in favour of redesign that supports each pattern in an integrated way. We uncover patterns consisting of brief glances at particular data and recommend them as possible candidates for new design work on widget extensions: small displays available while users use other apps.
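The model-checking step behind this analysis can be illustrated with a minimal sketch: computing the probability that a discrete-time Markov chain (standing in for an inferred activity pattern) eventually reaches a state of interest, i.e. a reachability property of the form P=? [F target]. The chain, state names and probabilities below are invented for illustration; the talk's analysis uses a probabilistic model checker over models inferred from real usage logs.

```python
def reach_probability(P, target, tol=1e-12, max_iter=100000):
    """Probability of eventually reaching `target` from each state of a DTMC.

    P: dict mapping state -> dict of successor -> transition probability.
    Solves x[s] = 1 if s == target, else sum_t P[s][t] * x[t],
    by fixed-point iteration until the largest change falls below tol.
    """
    states = list(P)
    x = {s: 1.0 if s == target else 0.0 for s in states}
    for _ in range(max_iter):
        delta = 0.0
        for s in states:
            if s == target:
                continue  # target is absorbing for this property
            new = sum(p * x[t] for t, p in P[s].items())
            delta = max(delta, abs(new - x[s]))
            x[s] = new
        if delta < tol:
            break
    return x

# Hypothetical usage pattern: from 'browse' the user loops with
# probability 0.5, glances at data with 0.2, or exits with 0.3.
P = {'browse': {'browse': 0.5, 'glance': 0.2, 'exit': 0.3},
     'glance': {'glance': 1.0},
     'exit':   {'exit': 1.0}}
x = reach_probability(P, 'glance')  # x['browse'] solves x = 0.2 + 0.5x = 0.4
```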
|Dr Junaid Qadir
Information Technology University
Punjab, Lahore, Pakistan
|Using Big Data for Development. With the explosion of social media sites and the proliferation of digital computing devices and Internet access, massive amounts of public data are being generated on a daily basis. Efficient techniques and algorithms to analyze this massive amount of data can provide near real-time information about emerging trends and provide early warning of an imminent emergency (such as the outbreak of a viral disease). In addition, careful mining of these data can reveal many useful indicators of socioeconomic and political events, which can help in establishing effective public policies. The emerging ability to use big data techniques for development (BD4D) promises to revolutionize healthcare, education, and agriculture; facilitate the alleviation of poverty; and help to deal with humanitarian crises and violent conflicts. The focus of this talk will be on reviewing the application of big data analytics for the purpose of human development. Despite all the benefits, the large-scale deployment of BD4D is beset with several challenges due to the massive size, fast-changing and diverse nature of big data. The most pressing concerns relate to efficient data acquisition and sharing, establishing the context (e.g., geolocation and time) and veracity of a dataset, and ensuring appropriate privacy. We will review existing BD4D work and will also highlight important challenges and open issues.
|Mid Semester Break
|Dr Stefano Beretta|
Dept. Informatica, Sistemistica e Comunicazione,
Università degli Studi di Milano-Bicocca, Italy.
|Approaches for Haplotype Assembly via Minimum Error Correction. Diploid organisms, such as humans, have two copies of each chromosome, inherited one from each parent. In this context, one of the main problems is the assignment of groups of genetic variations, such as Single Nucleotide Polymorphisms (SNPs), to the correct copy of the chromosome, that is, the computation of the two haplotypes. A haplotype is a sequence of genetic variants, and the problem of reconstructing the two haplotypes from a set of input reads covering the SNP positions is known as haplotype assembly. Solving the problem consists of computing the two haplotypes by assigning each input read to one of them (creating a bipartition of the reads) with the minimum number of corrections to the values at SNP positions, corrections that are needed because of sequencing errors or misplaced alignments. This formulation is called Minimum Error Correction (MEC) and has been proved NP-hard. This talk will present two recently developed approaches for solving the problem, both relying on Fixed-Parameter Tractable (FPT) algorithms: WhatsHap and HapCol. The key point of these two approaches is that the algorithms are exponential only in the coverage, that is, the maximum number of reads per SNP, making them suitable for processing data coming from the emerging sequencing techniques.|
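To make the MEC objective concrete, here is a toy brute-force sketch: score a bipartition by the number of corrections needed for each read group to agree on a consensus per SNP column, then enumerate all bipartitions. The read data are invented, and real instances are far too large for 2^n enumeration, which is exactly why WhatsHap and HapCol parameterise on coverage instead.

```python
from itertools import product

def mec_cost(reads, assignment):
    """Corrections needed under a given bipartition of the reads.

    reads: list of dicts {snp_position: allele (0 or 1)}; absent = uncovered.
    assignment: tuple of 0/1 placing each read on one haplotype.
    For each group and SNP column, the consensus allele is the majority
    value; every disagreeing entry counts as one correction.
    """
    cost = 0
    for side in (0, 1):
        group = [r for r, a in zip(reads, assignment) if a == side]
        positions = {p for r in group for p in r}
        for p in positions:
            col = [r[p] for r in group if p in r]
            ones = sum(col)
            cost += min(ones, len(col) - ones)
    return cost

def min_error_correction(reads):
    """Exact MEC by brute force over all 2^n bipartitions (tiny inputs only)."""
    return min((mec_cost(reads, a), a)
               for a in product((0, 1), repeat=len(reads)))

# Four clean reads over two SNPs plus one read with a single sequencing error.
reads = [{0: 0, 1: 0}, {0: 0, 1: 0},
         {0: 1, 1: 1}, {0: 1, 1: 1},
         {0: 0, 1: 1}]
cost, assignment = min_error_correction(reads)  # one correction suffices
```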
|Dr Jessica Enright
Lecturer in Mathematics
Computing Science and Mathematics
University of Stirling
|Deleting edges to restrict the size of an epidemic. Motivated by applications in network epidemiology, we consider the problem of determining whether it is possible to delete at most k edges from a given input graph (of small tree-width) so that the maximum component size in the resulting graph is at most h. While this problem is NP-complete in general, we provide evidence that many of the real-world networks of interest are likely to have small tree-width, and we describe an algorithm which solves the problem in time O((wh)^{2w} n) on an input graph having n vertices and whose tree-width is bounded by a fixed constant w.
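A minimal brute-force sketch of the decision problem, assuming an invented toy graph: try every deletion set of at most k edges and check the largest remaining component. This is exponential and tractable only for tiny graphs; the talk's contribution is the O((wh)^{2w} n) algorithm exploiting bounded tree-width, not this enumeration.

```python
from itertools import combinations

def max_component(vertices, edges):
    """Largest connected-component size, via union-find."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    sizes = {}
    for v in vertices:
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values())

def can_limit_epidemic(vertices, edges, k, h):
    """Can deleting at most k edges cap every component at h vertices?"""
    for d in range(k + 1):
        for removed in combinations(edges, d):
            kept = [e for e in edges if e not in removed]
            if max_component(vertices, kept) <= h:
                return True
    return False

# Toy example: a path on 5 vertices. One cut always leaves a component of
# size >= 3, but two cuts can cap every component at 2.
path = [(1, 2), (2, 3), (3, 4), (4, 5)]
can_limit_epidemic(list(range(1, 6)), path, 2, 2)
```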
|Dr Juliana K F Bowles|
School of Computer Science University of St Andrews
|Logic-based approaches for modelling and detecting conflicts in the treatment of chronic conditions. This talk shows how two different logic-based approaches can be used to (1) detect conflicts in the treatment of multiple chronic conditions, and (2) model cancer and tumour growth.|
(1): Looking initially at how modern complex systems are designed by modelling different aspects of the system separately, we developed an automated approach focusing on the composition of (static or behavioural) design models via constraint solvers. In this approach, we generate a number of logical constraints characterising the different models and use an automated solver to find a solution (if one exists) that produces the composed model. We show that this approach is directly applicable to the detection of conflicts in clinical pathways.
Clinical pathways are care plans which detail essential steps in the care of patients with a specific clinical problem, usually a chronic disease. A pathway includes recommendations of medications prescribed at different stages of the care plan. For patients with three or more chronic diseases (known as multimorbidities) the multiple pathways have to be applied together. One common problem for such patients is the possible adverse interaction between medications given for different diseases. In this context, we have used an optimising SMT solver (Z3) to quickly find the set of medications with the minimal number and severity of conflicts, which is assumed to be the safest.
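A toy sketch of the underlying optimisation: choose one medication per pathway so that the total severity of pairwise interactions is minimal. The medication alternatives and severity scores below are invented for illustration, and the exhaustive search stands in for the talk's actual encoding, which hands the same objective to the optimising SMT solver Z3 to scale to realistic pathway sizes.

```python
from itertools import product

def safest_prescription(pathways, severity):
    """Pick one medication per pathway minimising total interaction severity.

    pathways: list of lists of alternative medications (one list per disease).
    severity: dict mapping frozenset({drug_a, drug_b}) -> conflict severity;
              pairs absent from the dict are assumed conflict-free.
    """
    def total_severity(choice):
        return sum(severity.get(frozenset((a, b)), 0)
                   for i, a in enumerate(choice)
                   for b in choice[i + 1:])
    return min(product(*pathways), key=total_severity)

# Hypothetical alternatives for two conditions, with invented severities.
pathways = [['aspirin', 'clopidogrel'], ['warfarin', 'dabigatran']]
severity = {frozenset({'aspirin', 'warfarin'}): 3,
            frozenset({'clopidogrel', 'warfarin'}): 2,
            frozenset({'aspirin', 'dabigatran'}): 1}
choice = safest_prescription(pathways, severity)  # conflict-free pair
```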
(2) Automata can be used to capture cancer evolving through a (discrete finite) sequence of progressive stages called phenotypes. Automata consist of states (known as hallmarks of cancer) and transitions between states, indicating a progression or regression of the cancer. We have explored extensions and combinations of different variants of timed automata and associated tools to model and analyse a model of the disease in different ways. We combine patient information and comorbidities with the cancer automaton through composition. We have explored the use of model checking as an analysis technique to provide further insights into the effectiveness of treatment plans for a given patient, and how these could potentially inhibit or slow down the progression of cancer.
|Dr Colin Johnson|
Reader / Associate Dean of Sciences (Graduate Studies)
School of Computing
University of Kent
|Theories Destroy Facts (as Programs Execute). Computer programs execute step-by-step. Most computational processes are goal-directed, i.e. they have a specific task to solve. In a correct program, each of these steps should therefore contribute towards solving the task. This talk will explore some ways in which we can quantify this progress of a program/process towards a goal. This will use two main ideas: one is the idea of information-theoretic similarity, realised using the idea of compression distance; the other is using machine learning, and measuring the complexity of models generated. This will be applied to two specific problems: finding the location of bugs in code, and synthesising code from examples.|
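The compression-distance idea can be sketched with the standard normalized compression distance (NCD), here approximated with zlib; this is a generic illustration of measuring similarity by compressed size, not the talk's exact measure. Values near 0 indicate near-identical inputs, values near 1 unrelated ones, so tracking the NCD between successive program states and a goal state gives a rough notion of progress.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance using zlib-compressed length as C(.):
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Highly repetitive text vs. a byte pattern with little shared structure:
a = b"the quick brown fox jumps over the lazy dog " * 20
b = bytes(range(256)) * 4
ncd(a, a)  # small: the concatenation compresses almost as well as one copy
ncd(a, b)  # large: compressing one input barely helps with the other
```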
Top image: Illustrated example of running the Epsilon-constraint algorithm in order to maximise two objectives: find an optimal solution for objective 1; restrict the solution space according to the solution's value for objective 2 and look for an optimum solution of objective 1 in that space; repeat the previous step until there are no more solutions to be found. Any dominated solutions need to be filtered out of the set of solutions.
Courtesy of Dr. Nadarajen Veerapen. Related to a recent publication:
N. Veerapen, G. Ochoa, M. Harman and E. K. Burke. An Integer Linear Programming approach to the single and bi-objective Next Release Problem. Information and Software Technology, Volume 65, September 2015, Pages 1-13, ISSN 0950-5849. DOI:10.1016/j.infsof.2015.03.008
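The steps in the image caption above can be sketched for a finite bi-objective maximisation; the candidate solutions and objective values below are invented for illustration, and enumeration over a finite set stands in for the integer linear programs solved in the paper.

```python
def epsilon_constraint(solutions, f1, f2):
    """Enumerate the Pareto front of a finite bi-objective maximisation.

    Follows the caption's steps: optimise f1 (breaking ties by f2), then
    repeatedly restrict the solution space to strictly better f2 values and
    re-optimise f1 there, until no solutions remain; finally filter out any
    dominated solutions.
    """
    front = []
    feasible = list(solutions)
    while feasible:
        best = max(feasible, key=lambda s: (f1(s), f2(s)))
        front.append(best)
        bound = f2(best)
        feasible = [s for s in feasible if f2(s) > bound]  # epsilon step
    # filter dominated solutions (the caption's final step)
    return [s for s in front
            if not any(f1(t) >= f1(s) and f2(t) >= f2(s) and t != s
                       for t in front)]

# Toy solutions encoded directly as their (objective 1, objective 2) values.
points = [(5, 1), (4, 3), (3, 4), (2, 5), (5, 0), (3, 3)]
front = epsilon_constraint(points, lambda s: s[0], lambda s: s[1])
```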
Last Updated: 07 April 2016.