
Rutgers University: Department of MSIS Seminar Schedule

All the seminars are held on Fridays at 11am unless otherwise noted. For more information, please contact Mert Gurbuzbalaban.

Seminar Organizer(s): Mert Gurbuzbalaban and Thomas Lidbetter.


Fall 2017:

  • Sep 22: Kazuhisa Makino (Kyoto University)
    • Time: 11am, Location: 1WP, Room 921; 100BRR, Room 4095
    • Video streaming available at: TBA
    • Title: Posimodular Function Optimization
    • Abstract: A function f: 2^V -> R on a finite set V is posimodular if f(X) + f(Y) >= f(X \setminus Y) + f(Y \setminus X) for all X, Y \subseteq V. Posimodular functions often arise in combinatorial optimization; undirected cut functions are a standard example. We consider the problem of finding a nonempty subset X minimizing f(X) when the posimodular function f is given by oracle access. We show that posimodular function minimization requires exponential time, in contrast with the polynomial solvability of submodular function minimization, which forms another generalization of cut functions. On the other hand, the problem is fixed-parameter tractable in terms of the size of the image (or range) of f. In more detail, we show that Omega(2^{0.3219n} T_f) time is necessary and O(2^{0.92n} T_f) sufficient, where T_f denotes the time for one function evaluation. When the image of f is D = {0, 1, …, d}, O(2^{1.271d} n T_f) time is sufficient and Omega(2^{0.1609d} T_f) necessary. We can also generate all sets minimizing f in time 2^{O(d)} n^2 T_f. Finally, we consider the problem of maximizing a given posimodular function, showing that it requires at least 2^{n-1} T_f time in general, while it has time complexity Theta(n^{d-1} T_f) when D = {0, 1, …, d} is the image of f, for integer d. (A small brute-force illustration of the posimodularity condition appears after the Fall 2017 schedule below.)
  • Sep 26 (Tuesday): Khaled Elbassioni (Masdar Institute)
    • Time: 1pm, Location: 100BRR, Room 4095
    • Video streaming available at: 1WP, Room 921
    • Title: Exact Algorithms for List-coloring of Intersecting Hypergraphs
    • Abstract: Given a hypergraph (a set family) H on a finite set V of vertices, a positive integer k, and a mapping L assigning to each vertex in V a subset of {1,…,k} of “admissible colors”, a proper L-coloring of H is an assignment of colors to the vertices of H such that each vertex receives one color from its set and no hyperedge is monochromatic. Given H and L, deciding whether H is L-colorable is a well-known NP-hard problem which has received considerable attention in the literature. When every pair of hyperedges has a non-empty intersection, we obtain a (likely) easier problem. More precisely, we show that the list-coloring problem for such intersecting hypergraphs with m hyperedges and n vertices can be solved in quasi-polynomial time (mn)^{o(k^2 \log(mn))}. (A brute-force checker for proper L-colorings on a tiny instance is sketched after the Fall 2017 schedule below.)
  • Oct 10 (Tuesday): Paul Hand (Rice)
    • Time: 2pm, Location: 1WP, Room 921; 100BRR, Room 4095
    • Video streaming available at: TBA
    • Title: Deep Compressed Sensing
    • Abstract: Combining principles of compressed sensing with deep neural network-based generative image priors has recently been empirically shown to require 10X fewer measurements than traditional compressed sensing in certain scenarios. As deep generative priors (such as those obtained via generative adversarial training) improve, analogous improvements in the performance of compressed sensing and other inverse problems may be realized across the imaging sciences. In joint work with Vladislav Voroninski, we provide a theoretical framework for studying inverse problems subject to deep generative priors. In particular, we prove that with high probability, the non-convex empirical risk objective for enforcing random deep generative priors subject to compressive random linear observations of the last layer of the generator has no spurious local minima, and that for a fixed network depth, these guarantees hold at order-optimal sample complexity.
  • October 13: Tamas Terlaky (Lehigh)
    • Time: 11am, Location: 100BRR, Room 4095
    • Video streaming available at: 1WP, Room 921
    • Title: The Inmate Assignment and Scheduling Problem and its Application in the PA Department of Corrections
    • Abstract: The inmate assignment project, in close collaboration with the PA Dept. of Corrections, took five years from start to successful implementation. Our novel Inmate Assignment Decision Support System (IADSS) is designed with the main goal of simultaneously, and system-wide optimally, assigning inmates to correctional institutions. IADSS includes a new hierarchical multi-objective MILO model, which accurately describes the inmate assignment problem. This is the first time that OR methodologies have been used to optimize the operations of a correctional system and have been built into its routine business practice; this opens a rich and untouched area for the application of OR.
  • October 20: Patrick Johnstone (Rutgers)
    • Time: 11am, Location: 100BRR, Room 4095
    • Video streaming available at: 1WP, Room 921
    • Title: Faster Subgradient Methods for Functions with Holderian Growth
    • Abstract: In this work we study the subgradient method when minimizing convex functions that satisfy a particular growth condition. This growth condition, known as Holderian growth, states that the function must grow at least as fast as the p-th power of the Euclidean distance to the closest minimizer, for some p no smaller than 1. Mathematically, f(x) - f(x^*) >= c \|x - x^*\|^p, where x^* is the minimizer of f closest to x and c is some constant. This condition is widespread and includes functions with quadratic growth (p = 2) and functions with sharp minima (p = 1) as special cases. Using this condition, we develop several new stepsizes for the subgradient method which obtain faster convergence rates than are given by the classical theory. In fact, for the case p = 1, our method obtains linear convergence to a solution, which is a surprising rate for a subgradient-type method. (A toy subgradient-method illustration of the p = 1 case appears after the Fall 2017 schedule below.)
  • October 27: No seminar (INFORMS week)
  • November 3: Darinka Dentcheva (Stevens)
    • Time: 11am, Location: 100 Rock #5038
    • Video streaming available at: 1WP, Room 1027
    • Title: Optimization problems with stochastic order constraints
    • Abstract: Stochastic orders formalize preferences among random outcomes and are widely used in statistics and economics. We analyze stochastic optimization problems involving stochastic-order relations as constraints that relate performance functionals, depending on our decisions, to benchmark random outcomes. We discuss the relation of univariate and multivariate stochastic orders to utility functions, conditional value at risk, and coherent measures of risk. Necessary and sufficient conditions of optimality and duality theory for problems with stochastic order constraints involve expected utility theory, dual (rank-dependent) utility theory, and coherent measures of risk. The model provides a link between various approaches to risk-averse optimization. Some attention will be paid to the numerical solution of the problems, and several applications will be outlined.
  • November 10: Arash Asadpour (NYU Stern)
    • Time: 11am, Location: 1WP, Room 921
    • Video streaming available at: 100BRR, Room 4095
    • Title: Concise Bidding: a Simple Strategy for Advertisers with Multiple Budget Constraints
    • Abstract: A major challenge faced by marketers attempting to optimize their advertising campaigns is to deal with budget constraints. The problem is even harder in the face of multidimensional budget constraints, particularly given the many decision variables involved and their interplay through such constraints. Concise bidding strategies help advertisers deal with this challenge by introducing fewer variables to act on. In this talk, we discuss the problem of finding optimal concise bidding strategies for advertising campaigns with multiple budget constraints. We provide approximation algorithms based on ideas from dynamic programming, as well as a very fast dependent randomized rounding of the LP relaxation that runs in linear time and may be of independent interest. Our results do not rely on any structural assumption about the value or cost functions in the traffic simulators (bid landscapes) provided by the search engines. Moreover, our results are not limited to cost-per-click models and (as opposed to uniform bidding strategies, the current state of the art) can apply to pay-per-impression or pay-per-conversion models, too. We accompany our theoretical results with extensive experimental findings that show an improvement of between 1% and 6% over state-of-the-art uniform bidding in the case of one budget constraint (and up to 35% for multidimensional budget constraints), and an average increase of 5% over an enhanced version of uniform bidding designed for problems with multiple budget constraints.
  • December 8: Robert Schapire (Microsoft Research)
    • Time: 11am, Location: 1WP, Room 1027
    • Video streaming available at: 100BRR, Room 4095
    • Title: The Contextual Bandits Problem: Techniques for Learning to Make High-Reward Decisions
    • Abstract: We consider how to learn through experience to make intelligent decisions. In the generic setting, called the contextual bandits problem, the learner must repeatedly decide which action to take in response to an observed context, and is then permitted to observe the received reward, but only for the chosen action. The goal is to learn to behave nearly as well as the best policy (or decision rule) in some possibly very large and rich space of candidate policies. This talk will describe progress on developing general methods for this problem and some of its variants. (A bare-bones simulation of the contextual-bandit protocol is sketched after the Fall 2017 schedule below.)
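
Illustrative sketch (posimodular functions, Sep 22 talk): the Python snippet below is a hypothetical example, not material from the talk. It builds a small posimodular function, verifies the inequality f(X) + f(Y) >= f(X \setminus Y) + f(Y \setminus X) by brute force, and finds a nonempty minimizer by exhaustive enumeration rather than by the oracle-based algorithms discussed in the abstract.

    from itertools import combinations

    # A small posimodular function: f(X) = cut(X) + |X|, the cut function of an
    # undirected graph plus a nonnegative modular term.  The assert below
    # verifies posimodularity directly by brute force.
    V = [0, 1, 2, 3]
    edges = [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2)]

    def f(X):
        cut = sum(1 for u, v in edges if (u in X) != (v in X))
        return cut + len(X)

    def subsets(ground):
        for r in range(len(ground) + 1):
            yield from (set(c) for c in combinations(ground, r))

    # Posimodularity: f(X) + f(Y) >= f(X \ Y) + f(Y \ X) for all X, Y.
    assert all(f(X) + f(Y) >= f(X - Y) + f(Y - X)
               for X in subsets(V) for Y in subsets(V))

    # Brute-force minimization over nonempty subsets (2^n - 1 candidates),
    # in contrast with the oracle-complexity results quoted in the abstract.
    best = min((X for X in subsets(V) if X), key=f)
    print(sorted(best), f(best))     # -> [1] 3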
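
Illustrative sketch (list-coloring of intersecting hypergraphs, Sep 26 talk): a hypothetical brute-force checker, not the quasi-polynomial algorithm from the talk. It enumerates every assignment consistent with the color lists L and returns a proper L-coloring (no monochromatic hyperedge) if one exists; the tiny instance below is an arbitrary choice for illustration.

    from itertools import product

    # A small intersecting hypergraph (every two hyperedges share a vertex)
    # and per-vertex lists of admissible colors drawn from {1, ..., k}, k = 2.
    hyperedges = [{0, 1, 2}, {0, 1, 3}, {0, 2, 3}]
    lists = {0: {1}, 1: {1, 2}, 2: {1, 2}, 3: {2}}

    def l_colorable(hyperedges, lists):
        vertices = sorted(lists)
        for choice in product(*(sorted(lists[v]) for v in vertices)):
            coloring = dict(zip(vertices, choice))
            # Proper L-coloring: each vertex uses a color from its own list
            # (true by construction) and no hyperedge is monochromatic.
            if all(len({coloring[v] for v in e}) > 1 for e in hyperedges):
                return coloring
        return None

    print(l_colorable(hyperedges, lists))   # -> {0: 1, 1: 1, 2: 2, 3: 2}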
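
Illustrative sketch (subgradient method under Holderian growth, Oct 20 talk): a toy example of the sharp-minimum case p = 1. The objective f(x) = ||x||_1 satisfies f(x) - f(0) >= ||x||_2, i.e., Holderian growth with p = 1 and c = 1. The normalized subgradient iteration with a geometrically decaying stepsize is a generic textbook choice used here for illustration, not necessarily one of the stepsizes proposed in the talk.

    import numpy as np

    # f(x) = ||x||_1 has a sharp minimum at x* = 0:
    # f(x) - f(x*) = ||x||_1 >= ||x||_2  (Holderian growth with p = 1, c = 1).
    def f(x):
        return np.abs(x).sum()

    def subgrad(x):
        return np.sign(x)               # a subgradient of the l1 norm

    rng = np.random.default_rng(0)
    x = rng.standard_normal(10)
    print("initial gap:", f(x))

    step, decay = 1.0, 0.9              # geometrically decaying stepsize (illustrative)
    for _ in range(200):
        g = subgrad(x)
        gnorm = np.linalg.norm(g)
        if gnorm == 0:                  # exactly at the minimizer
            break
        x = x - step * g / gnorm        # normalized subgradient step
        step *= decay

    print("final gap:  ", f(x))         # shrinks geometrically toward 0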
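
Illustrative sketch (contextual bandits, Dec 8 talk): a bare-bones simulation of the interaction protocol described in the abstract (observe a context, pick one action, see only that action's reward), using an epsilon-greedy learner with per-action least-squares estimates on a synthetic linear-reward problem. This sketches the protocol only, not the methods discussed in the talk.

    import numpy as np

    rng = np.random.default_rng(0)
    n_actions, dim, rounds, eps = 4, 5, 5000, 0.1

    # Unknown per-action reward weights, used only by the simulator.
    true_w = rng.standard_normal((n_actions, dim))

    # Per-action least-squares statistics maintained by the learner.
    A = np.stack([np.eye(dim) for _ in range(n_actions)])
    b = np.zeros((n_actions, dim))

    total = 0.0
    for t in range(rounds):
        x = rng.standard_normal(dim)                     # observed context
        if rng.random() < eps:                           # explore
            a = int(rng.integers(n_actions))
        else:                                            # exploit current estimates
            w_hat = np.array([np.linalg.solve(A[i], b[i]) for i in range(n_actions)])
            a = int(np.argmax(w_hat @ x))
        r = true_w[a] @ x + 0.1 * rng.standard_normal()  # reward seen only for action a
        A[a] += np.outer(x, x)
        b[a] += r * x
        total += r

    print(total / rounds)                                # learner's average reward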

Spring 2017:

  • Feb 24: No seminar due to Rutgers Research Day.
  • March 3: Nikhil Vellodi (NYU)
    • Time: 11am, Location: Newark 1WP-502
    • Video streaming available at: New Brunswick, 100RR-4095
    • Title: Cheap Talk in Multi-Product Bargaining
    • Abstract: I study a game in which a buyer and a seller bargain over multiple products. The buyer has private valuations over the seller’s product range, and can communicate these through cheap-talk messages. In the benchmark model a fully revealing, welfare-maximizing equilibrium exists. However, a well-known refinement selects a non-empty set of partially informative equilibria, which I characterize fully. In particular, the set contains the ex-ante buyer-optimal equilibrium. These results hold for a broad class of preferences, and rest on a natural trade-off faced by the buyer: reveal his type in order to secure his preferred good, or hide his type in order to secure a better price. As goods become less substitutable, the buyer reveals more information, whilst his share of the surplus changes non-monotonically.
  • March 10: Mengdi Wang (Princeton)
    • Time: 11am, Location: New Brunswick 100RR-3095
    • Video streaming available at: Newark – 1WP-302
    • Title: Stochastic First-Order Methods in Data Analysis and Reinforcement Learning
    • Abstract: Stochastic first-order methods provide a basic algorithmic tool for online learning and data analysis. In this talk, we survey several innovative applications including risk-averse optimization, online principal component analysis, and reinforcement learning. We will show that convergence analysis of the stochastic optimization algorithms provides sample complexity guarantees for the corresponding online learning applications.
  • March 20 (Monday): Stan Uryasev (Florida)
    • Time: 11am , Location: TBA
    • Video streaming available at: TBA
    • Title: Buffered Probability of Exceedance (bPOE): Mathematical Properties and Applications
    • Abstract: (Joint work with Matt Norton and Alexander Mafusalov) The Probability of Exceedance (POE) is frequently used to measure uncertainties in outcomes. For instance, POE is used to estimate the probability that the assets of a company fall below its liabilities. POE measures only the frequency of outcomes and ignores their magnitude: it counts outcomes exceeding the threshold and “does not worry” about the amount by which each outcome exceeds it. POE lumps together all threshold exceedance events, potentially “hiding” quite large and very troublesome outcomes. Moreover, POE has poor mathematical properties when used to characterize discrete distributions of random values (e.g., when distributions are defined by observed historical data). POE for discrete distributions is a discontinuous function of the control variables, making it difficult to analyze and optimize. This presentation discusses a new probabilistic characteristic called Buffered Probability of Exceedance (bPOE). With bPOE, it is possible to count outcomes close to a threshold value, rather than only outcomes exceeding the threshold. To be more precise, bPOE counts tail outcomes averaging to some specific threshold value. For instance, 4% of land-falling hurricanes in the US have cumulative damage exceeding $50 billion (i.e., POE = 0.04 for threshold = $50 billion). It is estimated that the average damage from the worst 10% of hurricanes is $50 billion; in terms of bPOE, we say bPOE = 0.1 for threshold = $50 billion. bPOE shows that the largest damages, with magnitude around $50 billion, have frequency 10%. bPOE can be considered an important supplement to POE, and we think that bPOE should be routinely calculated together with POE. This example shows that bPOE exceeds POE, which is why it is called Buffered Probability of Exceedance; the positive difference, bPOE - POE, can be interpreted as a “buffer.” The bPOE concept was recently developed as an extension of Buffered Probability of Failure (introduced by Rockafellar and Royset). bPOE has been derived from the Conditional Value-at-Risk (CVaR) characteristic of uncertainty. In fact, bPOE is an inverse function of CVaR, and it inherits a majority of the exceptional mathematical properties of CVaR (which is a so-called “coherent measure of risk”). Similar to CVaR, minimization of bPOE can be reduced to convex and linear programming. We will discuss two applications of the bPOE concept. The first application considers the cash matching of a bond portfolio: we minimize the bPOE that assets fall below liabilities. The second application uses bPOE in data mining. Currently, the Area Under the Receiver Operating Characteristic Curve (AUC) is commonly used to evaluate classification models. AUC can be presented as the probability that some discrete random value is below zero. We explored the so-called Buffered AUC (bAUC) as a counterpart of the standard AUC. (A small numerical comparison of POE and bPOE on synthetic data is sketched after the Spring 2017 schedule below.)
  • March 24: Steve Alpern (Warwick Business School)
    • Time: 11am, Location: 1WP-1027 / 100RR-4095
    • Title: The Rendezvous Search Problem
    • Abstract: The rendezvous search problem asks how two (or more) unit-speed agents can minimize the expected time required to meet up when placed randomly in some search region, possibly a network. It was first posed informally in 1976 but did not receive attention in the literature until the 1990s. It has since become an important problem in robotics, coordination theory and, more recently, computer science. Many apparently simple problems are still unsolved.
  • March 31: Eugene Feinberg (Stony Brook)
    • Time: 11am, Location: TBA
    • Title: Recent Developments in Markov Decision Processes Motivated by Inventory Control
    • Abstract: The talk describes the recent progress in the theory of Markov Decision Processes (MDPs). This progress is motivated by inventory control applications of MDPs. The progress became possible because of recent generalizations of two facts in real analysis: Fatou’s lemma and Berge’s maximum theorem. In addition to inventory control applications, the talk describes new results on game theory and robust optimization.
  • March 31: Serhat Aybat (Penn State)
    • Time: 2pm, Location: 100R 5038, Video-conference: 1WP 921
    • Title: A distributed ADMM-like method for resource sharing under conic constraints over time-varying networks
    • Abstract: In this talk, a decentralized method for solving cooperative multi-agent optimization problems over a network of agents will be discussed, where only those agents connected by an edge can directly communicate with each other and the topology of the communication network may be time varying. The objective is to minimize the sum of agent-specific composite convex functions, i.e., each term in the sum is a private cost function belonging to an agent, subject to a conic constraint coupling all agents’ local decisions. An ADMM-like primal-dual algorithm is proposed to solve a saddle point formulation of the problem in a distributed fashion, where the consensus among agents is imposed on agents’ evaluations of the shadow price associated with the coupling constraint. We provide convergence rates in a) sub-optimality and infeasibility of agents’ local decisions, and b) consensus violation among agents’ shadow price estimates; examine the effect of underlying network topology on the convergence rate of the proposed decentralized algorithm; and show how to extend this method to handle directed communication networks. This optimization model abstracts a number of applications in machine learning, distributed control, and estimation using sensor networks. Joint work with Ph.D. student Erfan Yazdandoost Hamedani.
    • Bio: Necdet Serhat Aybat is an assistant professor in the Department of Industrial and Manufacturing Engineering at Pennsylvania State University. He received his Ph.D. degree in operations research from Columbia University in 2011. His research focuses on developing first-order algorithms for large-scale convex optimization problems from diverse application areas, such as distributed optimization, robust matrix decomposition, convex regression, and compressed sensing. His current research is supported by two NSF grants: one on designing optimization algorithms that simultaneously learn missing model parameters of a given parametric model, and the other one on designing smart power grids through distributed optimization.
  • April 14th: Melike Gursoy (Rutgers)
    • Time: 11am, Location: 100RR 3095, Videoconference at: 1WP 302
    • Title: Multi-Server Queues and Task Completion Times in a Random Environment
    • Abstract: We first present some queueing models with modulated arrival and service processes. We show that these models demonstrate the stochastic decomposition property, and derive the stationary distribution of the number in the system. Then, we investigate the task completion time for customers in such a random environment with generally distributed service and down times and exponentially distributed up times. We consider both the preemptive repeat different (Replace) and preemptive repeat identical (Repeat) service policies and obtain the Laplace transform of the completion time distribution. We discuss the conditions under which the task completion time has a heavy tail.
  • April 21st: Waheed Bajwa (Rutgers)
    • Time: 11am, Location: TBA
    • Abstract: TBA
  • May 5th: Lisa Hellerstein (NYU)
    • Time: 11am, Location: 1WP-1027 / 100RR-4095
    • Abstract: TBA
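
Illustrative sketch (bPOE vs. POE, March 20 talk): a small numerical comparison on synthetic loss data. The bPOE value is computed here via the tail-average description given in the abstract (the largest tail fraction whose average still reaches the threshold), as a sample-based approximation rather than the formal CVaR-inverse definition; the lognormal "damage" model and the threshold are arbitrary choices for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    losses = rng.lognormal(mean=1.0, sigma=1.0, size=100_000)  # synthetic "damage" data
    z = 15.0                                                    # threshold of interest

    # POE: frequency of outcomes exceeding the threshold.
    poe = np.mean(losses > z)

    # bPOE (sample approximation): the largest tail fraction whose average
    # is still at least the threshold, per the description in the abstract.
    worst_first = np.sort(losses)[::-1]
    tail_means = np.cumsum(worst_first) / np.arange(1, losses.size + 1)
    k = int(np.count_nonzero(tail_means >= z))                  # tail_means is non-increasing
    bpoe = k / losses.size

    print(f"POE  = {poe:.4f}")
    print(f"bPOE = {bpoe:.4f}  (bPOE >= POE; the gap is the 'buffer')")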