Daniel McKenzie.

About me.

I am an Assistant Adjunct Professor (i.e., a postdoc) in the Computational and Applied Mathematics group at UCLA. My primary advisor is Wotao Yin. I received my PhD from the University of Georgia in May 2019, where my advisor was Ming-Jun Lai. Prior to that I was an undergraduate at the University of Cape Town, from which I also received my Master's degree. In between Cape Town and Athens, Georgia, I spent a semester at the University of Bayreuth in Germany, working with Professor Fabrizio Catanese and his research group. You can find a copy of my CV here. On the right is a photo of me and my dad; I am the one on the left.

Research Interests.

I am interested in applied and computational mathematics. I enjoy using techniques from compressed sensing, graph theory and geometry to provide theoretically sound solutions to problems of practical interest in data science. Recently I have been thinking a lot about zeroth-order optimization.

Publications:

Note that * = undergraduate coauthor.
  1. Zeroth Order Regularized Optimization (ZORO): Approximately Sparse Gradients and Adaptive Sampling. Joint with HanQin Cai, Zhenliang Zhang and Wotao Yin.
    • Abstract: We study the problem of estimating the gradient of a function using only function values, via compressed sensing. Using the proposed gradient estimator, we develop a zeroth-order optimization algorithm that is more efficient than the prior state of the art when the objective function has approximately sparse gradients. We show experimentally that many real-world functions do indeed have such gradients. (A toy sketch of this gradient estimation idea appears in the code sketches after this list.)
    • Under review.
    • arXiv version available here.
  2. SCOBO: Sparsity-Aware Comparison Based Optimization. Joint with HanQin Cai, Zhenliang Zhang and Wotao Yin.
    • Abstract: We link the problem of approximating the gradient of a function using only comparison-based evaluations to one-bit compressed sensing. Similar to ZORO, we provide a gradient estimator and an optimization algorithm that exceed the prior state of the art when gradients are approximately sparse.
    • arXiv version available here.
  3. Balancing geometry and density: Path distances on high dimensional data. Joint with Anna Little and James Murphy.
    • Abstract: We explore further the use of power weighted shortest path distances for unsupervised learning on high dimensional data. We prove results on the fast computation of these distances, and examine more generally the interaction between data density and kernels commonly used in machine learning. (A minimal sketch of these path distances appears in the code sketches after this list.)
    • arXiv version available here.
    • Under review.
  4. Who killed Lilly Kane? A case study in applying knowledge graphs to crime fiction. Joint with Mariam Alaverdian*, William Gilroy*, Veronica Kirgios*, Xia Li, Carolina Matuk*, Tachin Ruankriengsin*, Andrea Bertozzi and P. Jeff Brantingham.
    • Abstract: We construct a knowledge graph for season one of the TV show Veronica Mars. Analysis of this knowledge graph reveals interesting information about the characters, the crimes and the interactions between the two. (A toy illustration of the triple structure appears in the code sketches after this list.)
    • Product of the Summer 2020 (online) UCLA REU.
    • arXiv version available here.
    • Accepted for presentation at the GTA3 workshop at IEEE Big Data 2020.
  5. Compressive Sensing for cut improvement and local clustering. Joint with Ming-Jun Lai.
    • Abstract: We show how the cut improvement problem for graphs can be phrased as a sparse recovery problem, so that algorithms originally developed for compressive sensing can be applied. We then use this new cut improvement approach as an algorithmic primitive to design new methods for local clustering and semi-supervised clustering.
    • Journal Reference: SIAM Journal on Mathematics of Data Science (SIMODS).
    • Click here for the journal version and here for the arXiv version.
    • (MATLAB) code for this project is available here.
    • Note that this paper subsumes earlier versions, available here and here.
  6. Power Weighted Shortest Paths for Clustering Euclidean Data. Joint with Steven Damelin.
    • Abstract: We study the use of power weighted shortest path distance functions for clustering high dimensional Euclidean data, under the assumption that the data is drawn from a collection of disjoint low dimensional manifolds. We argue, theoretically and experimentally, that this leads to higher clustering accuracy. We also present a fast algorithm for computing these distances.
    • Journal Reference: Foundations of Data Science, September 2019
    • Click here for the journal version and here for the arXiv preprint.
    • As part of this project we developed a novel Dijkstra-like algorithm for determining the k nearest neighbors of a Euclidean data point, with respect to any power weighted path distance or the longest-leg path distance. Click here for the MATLAB code.
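
Code sketches.

The three sketches below are toy Python illustrations of ideas from the papers above. They are my own minimal examples, written for exposition: none is the code used in the papers, and the function names, parameters and test problems are invented for the illustration.

First, the gradient estimation idea behind ZORO (paper 1). Finite differences of f along a few random directions are (noisy) linear measurements of the gradient, so when the gradient is approximately sparse it can be recovered, compressed-sensing style, from far fewer function queries than there are coordinates. The paper uses a more refined recovery routine and adaptive sampling; the sketch below settles for plain iterative hard thresholding.

    import numpy as np

    def hard_threshold(v, s):
        # Zero out all but the s largest-magnitude entries of v.
        w = np.zeros_like(v)
        idx = np.argsort(np.abs(v))[-s:]
        w[idx] = v[idx]
        return w

    def estimate_sparse_gradient(f, x, m=25, s=4, delta=1e-4, iters=100):
        # Finite differences along m random Rademacher directions give
        # linear measurements y ~ Z @ grad f(x); iterative hard
        # thresholding then recovers an s-sparse approximation to the
        # gradient from only m + 1 function queries.
        d = len(x)
        Z = np.random.choice([-1.0, 1.0], size=(m, d)) / np.sqrt(m)
        fx = f(x)
        y = np.array([(f(x + delta * z) - fx) / delta for z in Z])
        tau = 1.0 / np.linalg.norm(Z, 2) ** 2  # conservative step size
        g = np.zeros(d)
        for _ in range(iters):
            g = hard_threshold(g + tau * Z.T @ (y - Z @ g), s)
        return g

    # Toy test: a 50-dimensional function whose gradient is 2-sparse.
    f = lambda v: v[0] ** 2 + 3.0 * v[1] ** 2
    print(np.round(estimate_sparse_gradient(f, np.ones(50)), 2))
    # Expect roughly 2.0 in coordinate 0 and 6.0 in coordinate 1.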
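
Second, the power weighted shortest path distances of papers 3 and 6. Each edge of the complete graph on the data is weighted by its Euclidean length raised to a power p >= 1, so that cheap paths prefer many short hops through dense regions; shortest paths in this graph then define a density-aware distance. The brute-force computation below is cubic in the number of points; the fast algorithms from the papers are precisely what it omits.

    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.sparse.csgraph import dijkstra

    def power_weighted_distances(X, p=4.0):
        # Complete graph on the rows of X, each edge weighted by its
        # Euclidean length to the power p; Dijkstra gives all-pairs
        # power weighted shortest path distances.
        W = squareform(pdist(X)) ** p
        D = dijkstra(W, directed=False)
        return D ** (1.0 / p)  # p-th root: one common normalization

    # Two tight clusters far apart: within-cluster path distances stay
    # small, while any path between clusters must pay for the gap.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 0.1, (30, 2)),
                   rng.normal(5.0, 0.1, (30, 2))])
    D = power_weighted_distances(X)
    print(D[0, 5], D[0, 35])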
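
Third, the knowledge graph of paper 4. A knowledge graph stores facts as subject-relation-object triples; once the triples are loaded into a graph, standard network analysis (centralities, communities, and so on) applies. The triples below are invented placeholders to show the data structure, not the paper's data.

    import networkx as nx

    # Invented triples; the paper's graph is built from the show itself
    # and is far larger, with many more entities and relation types.
    triples = [
        ("Veronica Mars", "investigates", "Lilly Kane's murder"),
        ("Keith Mars", "father_of", "Veronica Mars"),
        ("Keith Mars", "investigates", "Lilly Kane's murder"),
        ("Lilly Kane", "friend_of", "Veronica Mars"),
        ("Duncan Kane", "brother_of", "Lilly Kane"),
    ]

    G = nx.MultiDiGraph()
    for subject, relation, obj in triples:
        G.add_edge(subject, obj, relation=relation)

    # A first, crude analysis: which entities are most connected?
    centrality = nx.degree_centrality(G)
    print(sorted(centrality.items(), key=lambda kv: -kv[1]))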

Teaching.

Winter 2021

Fall 2020

Spring 2020

Winter 2020

Fall 2019

Fall 2018


Spring 2018

Resources.

Useful links:

Reference Letters

In general I am happy to write reference letters for undergraduates that I have taught or conducted research with. Before you ask me for a reference letter, here are some considerations:

Undergraduate Research

Due to research and teaching commitments, I am unable to supervise any undergraduate research projects at the moment.

For South African Undergraduates

If you are a South African undergraduate considering doing a PhD in mathematics in the United States, here is some general advice:

Contact.

  • You can reach me at [my surname]@math.ucla.edu. The email addresses [my surname]@math.uga.edu and danmac29[at]uga.edu will work until mid-2020.
  • Physical address: Office 5346, Mathematical Sciences Building, 520 Portola Plaza, Los Angeles, CA 90095