  • Reverse Turing Test (RTT): The Turing Test (TT) assesses a machine's ability to exhibit human-like behavior or intelligence. When such a test is administered by another machine rather than a human judge, the problem is known as the Reverse Turing Test. The goal of an RTT is to see whether a machine (or a machine learning model) can detect whether a given artifact was generated by a human or by a machine. I focus specifically on distinguishing text generators (a minimal illustrative sketch appears after this list).

  • White-box Adversarial Robustness: To build image classifiers that are robust to adversarial attacks, we fused Bayesian inference with the traditional neural network framework. This yields a Bayesian Neural Network (BNN), which proved more robust to adversarial attacks than a traditional NN. The hypothesis is that BNNs avoid the shortcomings of traditional NNs that make them vulnerable to adversarial attacks: proneness to overfitting, overconfident predictions, and the need for explicit regularization. We then evaluated the model against strong state-of-the-art white-box adversarial attacks, and the performance results supported our hypothesis (a sketch of Bayesian-style prediction appears after this list).

  • Comparing Differential Evolution Algorithm to Classical and Evolutionary Optimization: Compared the performance of Differential Evolution (DE), an evolutionary algorithm, with classical optimization methods (gradient descent, stochastic gradient descent, Newton-Raphson, quasi-Newton) and another evolutionary algorithm, Particle Swarm Optimization (PSO). DE and PSO are also known as zeroth-order optimization algorithms because they do not require the objective function to be differentiable or continuous. We measured each algorithm's ability to minimize standard test functions (Rosenbrock, Ackley, and Sphere) and to fit a linear regression problem, using RMSE (Root Mean Square Error) as the loss and wall-clock time in seconds as the measure of computational cost. The results suggest that DE is the best optimizer for these tasks; however, on the regression problem it achieved the lowest loss at a significantly higher computational cost (a small comparison sketch appears after this list).

  • Black-box Adversarial Robustness: Used deep learning and computer vision techniques to run experiments on satellite data. The goal of the project was to assess the robustness of an image classification model by attacking it with a nuanced black-box adversarial technique. The project falls under Distribution Statement C, which prevents me from giving any further details.

  • Numerical Simulation of Vibrations of Mechanical Structures: This project required me to use MATLAB, matrix analysis, and partial differential equations to model a 3D structure and solve for its mass, stiffness, and damping matrices. These matter because the goal of the research was to mitigate the effects of earthquakes in earthquake-prone areas by finding an appropriate damping constant; with suitable damping applied, a structure can better withstand earthquake loading. This work became my senior thesis (a toy version of the damped model appears after this list).

  • Hackathons:
    • Chrome extension: A procrastination-avoidance tool called ProcrastinationStation that times out after an hour on Facebook and asks you to solve 3 arithmetic questions before continuing. You can test it out here.
    • Shiny project: a data visualization tool.
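
For the Reverse Turing Test project, here is a minimal sketch of the kind of human-vs-machine text detector involved. It assumes a generic TF-IDF plus logistic-regression classifier, which is not necessarily the model used in the project, and the toy texts and labels are invented purely for illustration.

```python
# Illustrative reverse-Turing-test detector: predict whether a piece of text
# was written by a human or generated by a machine. The samples below are
# made up; a real experiment would use a labeled corpus of human-written and
# model-generated text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training samples: 1 = human-written, 0 = machine-generated.
texts = [
    "I grabbed a coffee on the way in, so forgive the typos.",
    "The weather turned ugly right after we left the trailhead.",
    "It is important to note that the aforementioned considerations are important.",
    "In conclusion, the conclusion concludes the topic discussed above.",
]
labels = [1, 1, 0, 0]

# Character n-grams tend to pick up stylistic regularities of text generators.
detector = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
detector.fit(texts, labels)

print(detector.predict(["Furthermore, it is worth noting the noteworthy points noted above."]))
```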
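
For the white-box robustness project, here is a minimal sketch of Bayesian-style prediction with a neural network, using Monte Carlo dropout as a cheap stand-in for full Bayesian inference. The architecture, dropout rate, and number of samples are illustrative assumptions, not the BNN actually used.

```python
import torch
import torch.nn as nn

class MCDropoutNet(nn.Module):
    """Small classifier whose dropout layers double as an approximate posterior."""
    def __init__(self, in_dim=784, hidden=256, classes=10, p=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def predict_bayesian(model, x, samples=30):
    # Keep dropout active at inference time and average class probabilities
    # over stochastic forward passes. The spread of the samples can flag
    # low-confidence inputs instead of a single overconfident point estimate.
    model.train()
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(samples)]
    )
    return probs.mean(0), probs.std(0)

if __name__ == "__main__":
    model = MCDropoutNet()
    x = torch.randn(4, 784)          # stand-in for a batch of flattened images
    mean, std = predict_bayesian(model, x)
    print(mean.argmax(-1), std.max(-1).values)
```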
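
For the optimization comparison, here is a small sketch that pits SciPy's differential evolution against a quasi-Newton (BFGS) baseline on the Rosenbrock test function. The dimensionality, bounds, starting point, and seed are arbitrary choices for the demo, not the settings used in the study.

```python
import time
import numpy as np
from scipy.optimize import differential_evolution, minimize, rosen

bounds = [(-5.0, 5.0)] * 5    # 5-dimensional Rosenbrock
x0 = np.zeros(5)              # starting point for the gradient-based method

# Zeroth-order: differential evolution needs only function evaluations.
start = time.perf_counter()
de_result = differential_evolution(rosen, bounds, seed=0)
de_time = time.perf_counter() - start

# Quasi-Newton baseline: BFGS uses gradient information.
start = time.perf_counter()
bfgs_result = minimize(rosen, x0, method="BFGS")
bfgs_time = time.perf_counter() - start

print(f"DE:   f = {de_result.fun:.3e}  in {de_time:.2f} s")
print(f"BFGS: f = {bfgs_result.fun:.3e}  in {bfgs_time:.2f} s")
```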
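
For the vibration project, here is a toy Python version of the damped multi-degree-of-freedom model (the thesis work itself was done in MATLAB on a 3D structure). The two-storey layout and all masses, stiffnesses, and damping constants below are placeholders, not values from the thesis.

```python
# Assemble mass (M), damping (C), and stiffness (K) matrices for a 2-storey
# shear building and integrate M x'' + C x' + K x = 0 from an initial
# displacement to see how quickly the chosen damping dissipates the motion.
import numpy as np
from scipy.integrate import solve_ivp

m1, m2 = 1000.0, 800.0          # floor masses (kg)
k1, k2 = 2.0e6, 1.5e6           # storey stiffnesses (N/m)
c1, c2 = 4.0e3, 3.0e3           # damping constants (N*s/m), the tuning target

M = np.diag([m1, m2])
K = np.array([[k1 + k2, -k2], [-k2, k2]])
C = np.array([[c1 + c2, -c2], [-c2, c2]])
Minv = np.linalg.inv(M)

def rhs(t, y):
    """First-order form: y = [x, v], with v' = -M^{-1} (C v + K x)."""
    x, v = y[:2], y[2:]
    return np.concatenate([v, -Minv @ (C @ v + K @ x)])

y0 = np.array([0.01, 0.02, 0.0, 0.0])   # initial storey displacements (m), zero velocity
sol = solve_ivp(rhs, (0.0, 5.0), y0, max_step=1e-3)
print("peak top-storey displacement:", np.abs(sol.y[1]).max(), "m")
```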