# Sitemap

A list of all the posts and pages found on the site. For the robots out there, an XML version is available for digesting as well.

## Markdown

This is a page not in the main menu.

## Future Blog Post

Published:

This post will show up by default. To disable scheduling of future posts, edit config.yml and set future: false.

## Blog Post number 4

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

## Blog Post number 3

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

## Blog Post number 2

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

## Blog Post number 1

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

## Portfolio item number 1

Short description of portfolio item number 1

## Portfolio item number 2

Short description of portfolio item number 2

## Power weighted shortest paths for clustering Euclidean data

Published in Foundations of Data Science, 2019

We study the use of power weighted shortest path distance functions for clustering high dimensional Euclidean data, under the assumption that the data is drawn from a collection of disjoint low dimensional manifolds. We argue, theoretically and experimentally, that this leads to higher clustering accuracy. We also present a fast algorithm for computing these distances.

Recommended citation: McKenzie, Daniel and Damelin, Steven. (2019). "Power weighted shortest paths for clustering Euclidean data." Foundations of Data Science. 1(3). https://arxiv.org/abs/1905.13345
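The core idea can be illustrated with a minimal sketch (not the fast algorithm from the paper): each pair of points is connected by an edge whose cost is the Euclidean distance raised to a power p, and the power weighted distance between two points is the cheapest path through the data. The function name and the brute-force Dijkstra-from-every-source approach here are illustrative choices, not the paper's implementation.

```python
import heapq
import numpy as np

def power_weighted_path_dists(X, p=2.0):
    """All-pairs power weighted shortest path distances on a point cloud X.

    Edge cost between points i and j is ||x_i - x_j||^p; the distance is
    the minimum total edge cost over all paths (Dijkstra from each source
    on the complete graph). For p > 1 this favors paths that hop through
    dense regions of the data.
    """
    n = len(X)
    # Pairwise Euclidean distances, raised to the power p.
    diffs = X[:, None, :] - X[None, :, :]
    W = np.linalg.norm(diffs, axis=2) ** p

    D = np.full((n, n), np.inf)
    for s in range(n):
        dist = np.full(n, np.inf)
        dist[s] = 0.0
        heap = [(0.0, s)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue  # stale heap entry
            for v in range(n):
                nd = d + W[u, v]
                if nd < dist[v]:
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        D[s] = dist
    return D

# Three collinear points: with p = 2, going 0 -> 1 -> 2 costs 1 + 1 = 2,
# cheaper than the direct hop 0 -> 2, which costs 2^2 = 4.
X = np.array([[0.0], [1.0], [2.0]])
D = power_weighted_path_dists(X, p=2.0)
```

The tiny example shows why these distances help clustering: points connected by a chain of close neighbors end up much closer to each other than points separated by a gap, even if the straight-line distances are similar.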

## Compressive sensing for cut improvement and local clustering

Published in SIMODS, 2020

We apply tools from compressive sensing to the problem of finding clusters in graphs.

## Who killed Lilly Kane? A case study in applying knowledge graphs to crime fiction.

Published in GTA3 workshop at IEEE Big Data, 2020

We construct and analyze a knowledge graph for season one of the TV show Veronica Mars.

## A one-bit, comparison-based gradient estimator.

Published in (under review), 2021

We use tools from one-bit compressed sensing to construct a new algorithm for comparison-based optimization.

## Zeroth Order Regularized Optimization (ZORO): Approximately sparse gradients and adaptive sampling.

Published in SIOPT (to appear), 2021

We propose a new zeroth-order optimization algorithm that uses compressive sensing to approximate gradients.

## A zeroth-order block coordinate descent algorithm for huge-scale black-box optimization.

Published in ICML, 2021

We propose a new algorithm for ultra-high dimensional black-box optimization (over 1 million variables).

## Learn to predict equilibria via Fixed Point Networks.

Published in (under review), 2021

We apply the FPN technology developed in an earlier work to the problem of predicting Nash equilibria in parametrized games.

## Curvature-Aware Derivative-Free Optimization.

Published in (under review), 2021

We propose a line search algorithm for zeroth-order optimization with low query complexity, both in theory and in practice.

## Balancing geometry and density: Path distances on high dimensional data.

Published in SIMODS (to appear), 2021

We further explore the use of shortest path metrics on high dimensional data.

## JFB: Jacobian-Free Backpropagation for Implicit Networks.

Published in AAAI, 2022

We propose a new, much faster, kind of backprop for implicit-depth neural networks.

## Cut Improvement and Clustering using Compressive Sensing.

Published:

Based on this paper. Here are the slides. This talk has been given virtually and in person at several other venues.

## Learning to predict Nash equilibria from data.

Published:

Based on this and this paper. Here are the slides. Similar versions of this talk were/will be given at the ZiF Mathematics of Machine Learning conference and in the Mean-field games and optimal transport seminar.

## A zeroth-order block coordinate descent algorithm for huge-scale black-box optimization.

Published:

Based on this paper. Here are the slides. This short ICML talk was actually presented by Yuchen Lou.

## Precalculus

Undergraduate course, University of Georgia, 2014

(2014–2019) While a graduate student at UGA I taught Math1113, a one-semester precalculus course, six times. The textbook we used was Precalculus by Julie Miller and Donna Gerkin. For some sections we used the ALEKS homework system. You can find a sample syllabus here and a copy of my first day of class slides here.

## Calculus 1

Undergraduate course, University of Georgia, 2015

(2014–2019) While a graduate student at UGA I taught Math2250, a one-semester Calculus 1 course, three times. We used the textbook University Calculus, Early Transcendentals by Hass, Weir and Thomas, along with the WebWork homework system. I experimented with a variety of teaching modalities, and one thing I found effective was creating a worksheet for every lesson, which we would start in class and students would finish at home. You can find an example of such a worksheet here, and a copy of the syllabus here.

## Calculus 3

Undergraduate course, University of California, Los Angeles, 2019

Math 32A is a large (~200 students), one-quarter course on multivariable calculus. Teaching Math 32A was an interesting experience, as it involved giving auditorium-style lectures as well as managing a grader and three TAs who met with the students in smaller groups. You can find a copy of the syllabus here. The reviews were mostly favorable.

## Mathematics of Data Theory

Undergraduate course, University of California, Los Angeles, 2020

(2019–2021) At UCLA I co-developed and then taught (three times) Math 118, an introduction to the mathematics of data science. I try to emphasize both theory and practice, so some lectures are slide-based presentations while others are more interactive, using Jupyter notebooks to play around with algorithms. I am happy to share my complete set of lecture slides, but am not yet willing to make them completely public, so email me if you would like access.

## Introduction to Statistics

Undergraduate course, University of California, Los Angeles, 2020

Math 170S is a mathematically rigorous introduction to statistics, using Hogg, Tanis and Zimmerman's Probability and Statistical Inference. This was the first course I taught entirely over Zoom.

## Honors Applied Numerical Methods

Undergraduate course, University of California, Los Angeles, 2021

At UCLA I co-developed Math151AH and Math151BH, honors versions of the pre-existing applied numerical methods classes. This two-course sequence examines the theory and implementation of algorithms for solving fundamental problems in numerical analysis, for example least-squares problems and the singular value decomposition (SVD). We used this textbook. Here are the syllabi for Math151AH and Math151BH, and here is a sample lecture on one of my favorite algorithms, the power method.
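The power method mentioned above fits in a few lines: repeatedly multiply a vector by the matrix and normalize, and the iterates converge to the dominant eigenvector (when the largest eigenvalue is strictly larger in magnitude than the rest). This is a minimal textbook-style sketch, not the lecture's code; the function name and stopping rule are illustrative choices.

```python
import numpy as np

def power_method(A, num_iters=1000, tol=1e-12, seed=0):
    """Estimate the dominant eigenvalue and eigenvector of A.

    Iterates v <- A v / ||A v|| from a random start; the eigenvalue
    estimate is the Rayleigh quotient v^T A v (A assumed symmetric here
    so that the Rayleigh quotient is the natural estimate).
    """
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    lam = v @ A @ v
    for _ in range(num_iters):
        w = A @ v
        v = w / np.linalg.norm(w)
        lam_new = v @ A @ v  # Rayleigh quotient
        if abs(lam_new - lam) < tol:
            lam = lam_new
            break
        lam = lam_new
    return lam, v

# Symmetric 2x2 matrix with eigenvalues 3 and 1; the dominant
# eigenvector is a multiple of (1, 1).
A = np.array([[2.0, 1.0], [1.0, 2.0]])
lam, v = power_method(A)
```

Convergence is geometric at rate |λ₂/λ₁|, which is why the method is fast when the spectral gap is large and slow when eigenvalues cluster.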