First Interests

I knew I wanted to participate in research because I wanted to apply the science I learned from textbooks to practical use. I have always been a hands-on person, and I knew that by practicing science I would have a more fulfilling college career. I decided to pursue this interest during the summer before my freshman year by applying to the CSEP program known as SIMS, the Summer Institute for Mathematics and Science. During this program, I was given a brief opportunity to observe and feel what undergraduate research is like. Afterward, I knew that I wanted to continue doing undergraduate research, but I was unsure where to begin.

I eventually heard about the College of Creative Studies (CCS) at UCSB, which offers its students mentorship under a UCSB faculty member to help and encourage them to enter and flourish in undergraduate research. After some mentorship from CCS faculty, I decided to look into a research fellowship to further aid my search for an undergraduate research lab. After being accepted into the Early Undergraduate Research and Knowledge Acquisition (EUREKA) program, I received additional help finding a research lab, and eventually I found a spot in the Weimbs lab under the mentorship of Dr. Torres.

Now that I have been working in the lab for several weeks, I hope to continue developing my research and lab skills. During the EUREKA program, I also hope to strengthen my professional and networking skills, and to become more adept at giving presentations in a clear, concise, and easy-to-understand manner. I hope that eventually I can be a contributing member of the field of science.

The Robot Revolution – Astronomy and Computers

For thousands of years humans have stared at the night sky, naming constellations, telling stories, and making observations about the light of distant stars. Yet for the majority of that time, astronomers were reliant on what they could glean with the unaided eye. Without a telescope, only about 6,000 stars can be seen from Earth, and from one spot you could only see about a third of those (Bryson). This is a tiny fraction of the roughly 1×10^24 stars estimated to exist. Since the invention of the telescope in the early 1600s, technological advances have gone hand in hand with observational astronomy, paving the way for astronomers to look farther and create a clearer picture of our universe.

Before this summer, I had thought that observational astronomy consisted of a lone astronomer, or perhaps a team, travelling to be on site with a telescope and staying up all night to adjust the telescope's position and make observations. Not too long ago, this wasn't far from the truth. I'd seen images from the Hubble Space Telescope and other photographs made by professionals and amateurs alike, yet I had no sense of the magnitude of the technological advances that had been made in the field.

This summer I began work with the Supernova Group at Las Cumbres Observatory. Amazingly, Las Cumbres Observatory doesn’t actually do any observing on-site. Instead, they manage robotic telescopes around the world that don’t even require a scientist on-site to operate them. This came as a complete shock to me. As far as my role in all of this, I’m not sure quite what I expected, but it certainly wasn’t 8+ hours a day in front of a computer. For interested readers, my daily work schedule looks something like this:

8:30 am: Bike to work

9-5: Work at my computer

5:00 pm: Bike home

Exciting, right? The first few days were grueling and frustrating. I had limited experience with programming, and working at a computer all day was a big shift from attending classes and doing homework. Yet the experience has grown on me. It is amazing how much we humans are capable of with a computer at hand.

My current job at the observatory is to create simulations for the new Large Synoptic Survey Telescope (LSST). The LSST will be one of the biggest telescopes in the world, with an aperture of 8.4 meters. (For some suggested names of future large telescopes, see https://xkcd.com/1294/) In addition, the LSST is completely automated, with preprogrammed directions of where to look during its 10-year survey. The telescope will take in 30 terabytes of data nightly (Lerner). In comparison, the entire NASA data set from 1955 to 2000 consisted of only 1 terabyte. There are not enough scientists in the world to sort through all this data manually (and I'm certainly glad they didn't just decide to leave this job to the interns).

My goal is to take a known supernova, pretend that it appeared at a certain point in the sky on a certain day of the LSST's survey, and then try to answer the question of whether we would be able to find it again. The process of getting this code up and running has been an ordeal, during which I've learned a lot about programming along with the science behind supernovae and the LSST. In the end I would like to be able to run 100,000 simulations for each kind of supernova, totaling nearly a million. Even my computer gets a bit tired out after that kind of task!
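To give a flavor of what one such simulation might look like, here is a toy sketch in Python. Every number in it (the limiting magnitude, the revisit cadence, the light-curve shape) is a made-up assumption for illustration only; the real work uses the LSST's actual planned cadence and proper supernova light-curve models.

```python
import random

# Toy sketch of supernova-recovery trials (NOT the real LSST pipeline).
# Assumption: a supernova is "recovered" if the survey visits its sky
# position at least twice while it is brighter than the limiting magnitude.

LIMITING_MAG = 24.5   # assumed single-visit depth (illustrative)
REVISIT_DAYS = 3      # assumed average revisit cadence for one field

def light_curve_mag(peak_mag, days_since_peak):
    """Crude Type Ia-like light curve: rise ignored, linear decline."""
    return peak_mag + 0.03 * days_since_peak  # ~0.03 mag/day fade

def recovered(peak_mag, explosion_day, survey_days=365):
    """Count visits where the transient is above the detection threshold."""
    detections = 0
    day = 0
    while day < survey_days:
        if day >= explosion_day:
            if light_curve_mag(peak_mag, day - explosion_day) < LIMITING_MAG:
                detections += 1
        day += REVISIT_DAYS
    return detections >= 2

# Run many trials with random explosion dates to estimate a recovery rate.
random.seed(0)
trials = [recovered(peak_mag=22.0, explosion_day=random.uniform(0, 365))
          for _ in range(10000)]
rate = sum(trials) / len(trials)
print(f"recovered fraction: {rate:.2f}")
```

Scale this up to realistic cadences and several supernova types and you can see how the full set of runs climbs toward a million.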

Supernovae are notoriously difficult to spot: they last only a short time and are nearly impossible to see with the naked eye. In 1980, only one or two supernovae were discovered each year. With the advent of advanced telescopes and digital photography able to record more than the human eye, this number increased to nearly 200 by 2000. As of 2012, astronomers are finding over 1,000 supernovae per year (O’Brien).

Thankfully, with the billions of stars there are out there, astronomers are no strangers to big data. In fact, big data and astronomy have been going steady for a while now. However, we’re still looking for ways to improve how we can store and analyze this excess of data. Sometimes, new technology leads to great improvements in astronomy, and sometimes astronomy must push the advancement of technology.

Sources:

Bryson, Bill. “The Reverend Evans’s Universe.” A Short History of Nearly Everything. New York: Broadway, 2003. 33. Print.

O’Brien, Tim. “Supernova 2014J and the Upcoming Deluge of Discoveries.” Professor Tim O’Brien. N.p., 10 May 2014. Web. 08 July 2017.

Lerner, Preston. “July/August 2017.” Discover Magazine. Discover Magazine, 19 July 2011. Web. 08 July 2017.

Why Your Mindset is Important

Most people only really consider a few concrete factors when approaching a task or problem. They might think about how hard or how time-consuming something is, but most miss one of the most important aspects: their mindset. Transitioning from high school to college, I realized that I couldn’t just cruise by as easily as I had before; the classes were more difficult and the workload was larger. Later on, when I started working in a lab, I realized that the style of work had similarly changed. Going into lab I knew very little, and I had to learn a lot before I could be productive.

During EUREKA, one of the workshops we attended, presented by Claire Zedelius, was on the growth mindset. The growth mindset is the idea that talent and ability are gained mostly through experience and training. This contrasts with a fixed mindset, which suggests that talent and ability are innate and largely unchangeable. After attending the presentation, I realized that I could connect these mindsets to my transitions from high school to college and into research. While it is easy to fall into a fixed mindset over time, it is important to understand that not doing well immediately is not a reflection of your overall ability; rather, it is a sign of the need for more practice and knowledge.

Looking back, I’ve realized that whenever I’ve faced the challenge of new content to master, I’ve had to accept that not all concepts and ideas are easy to learn, and that some require a lot of work to understand. Difficulty is a natural part of the learning process, and if you constantly find yourself facing no difficulties at all, it is a sign that you should push yourself further. This is the attitude I bring to my work, and it is with this attitude that I plan to continue my academic career.

© 2016 JUSTIN SU. ALL RIGHTS RESERVED.

Sharing My First Research Conference Experience with The Highlanders

UC Riverside and I go way back. Ten years ago, I was an elementary school boy attending a cousin’s PhD commencement at UC Riverside. Now, for the first time in ten years, I returned to UC Riverside to experience what many researchers do yearly: presenting at a research conference.

SCCUR stands for the Southern California Conferences for Undergraduate Research. It was their Fall Symposium, and I was eager to share the research I had done over the summer. When I arrived, I wasn’t anticipating any catered food until lunch, yet a simple breakfast was served. That, and especially chugging down a cup of OJ, was just what I needed to kickstart the day.

After checking in, I sat in an auditorium filled with unfamiliar faces. Introducing myself to those around me, whom I had thought of as strangers, slowly turned into something like conversations with my lab mates. We conversed about our research, scientific backgrounds, and undergraduate life. The hall gradually quieted as SCCUR board member Dr. Jack Eichler welcomed us and officially commenced the conference. Dr. Susan Wessler, the plenary speaker, soon came up and gave a talk about her research on transposable elements. I was intrigued to learn that a big chunk of our genome consists of these transposable elements, long thought to have no apparent use, yet research is finding that they actually do. Her lab tries to decipher the uses of transposable elements, using some techniques that, to my surprise, I already knew of. That talk definitely struck a chord with me, instilling a drive to find out more about transposable elements and connect the dots to what I already know.

After listening to presenters give their talks and eating lunch with our two fellow Gorman Scholars, it was showtime. The poster was up; I was hydrated; and people started shuffling into the room. Having a spot near the entrance definitely got many to take an interest in my poster. I was enthused to share my work with everyone interested, especially those who knew a lot about microtubules. Whenever there was downtime, I would take the opportunity to learn what my neighbors’ projects were about. Overall, I was busy throughout the entire session: introducing myself, running down the key points of my project, and networking with those around me. The environment was lively, and the day was one to remember, considering this was my first research conference experience.

Reinforcing the point from my first blog post: you do get the recognition, the food, the drinks, and especially the connections. Driving away from UC Riverside was a bittersweet moment; I felt happy that it happened and sad that it was over. I learned a lot from SCCUR, and I encourage any undergraduate researcher to experience presenting at a research symposium. This year’s research experience has been extremely educational, and it sure was a summer well spent. I truly thank CSEP for supporting me and my project this year, and I cannot wait to present at the next research conference that will have me.

An Introduction to Tidal Harmonic Analysis

You’re lying on the beach. Your eyes are closed and the sun is warm. All is well. The ocean, however, grows louder and louder, until suddenly a surge of water advances and drenches you and all your belongings!

What may seem to be the ocean’s way of getting back at the humans who pollute its waters is actually just the periodic ebb and flow of the ocean known as the tides.

In many aspects of oceanography, it is useful to separate a data series (temperature, velocity, pressure, etc.) into tidal and non-tidal components. For example, in my work for EUREKA, I am trying to evaluate changes in pressure (and, relatedly, sea level height) measured via a sensor placed on the ocean floor. I need to be able to discern changes in sea level height on the order of ±5 cm. This became a difficult proposition when I realized the sea level is constantly fluctuating on the order of ±2 m multiple times a day!

If you are interested in the physical mechanisms that underlie the tides, I highly recommend the video below. For this post, however, I will be focusing on the techniques oceanographers use to remove the tidal components of their data.

The Building Blocks:

Let’s acquaint ourselves with what a typical pressure signal looks like over a month-long period. The blue line represents the pressure signal (measured in decibars) and the red line represents the average value of the signal over the month. The periodic nature of the graph implies a strong tidal component; other periodic trends exist, such as wind forcing of the water due to a sea breeze, but none are as consistent at the scale of the tides.

Graph 1

Our goal is to identify the tidal signal, and since it is periodic, it is a good idea to review our sines and cosines, as they are useful for modeling periodic signals.

Here is a simple sine function: y = sin(x) from 0 to 6 pi.

Graph 2

This graph is clearly periodic, yet it doesn’t quite represent our pressure data. We can do better, though! If we add some other periodic functions, we will really start to see some resemblance between our pressure signal and the simple graph I created below.

Here y = sin(x) + cos(x) + sin(2x) + cos(2x) from 0 to 30 pi.

Graph 3

We can continue this process of adding up various sines and cosines until the result resembles our pressure signal. In fact, mathematicians in the 18th and 19th centuries deduced that essentially any periodic function can be represented as a summation of sines and cosines.

Here is a link to a wonderful animation showing how even a couple of sines and cosines can add up to look like a sawtooth!

http://bl.ocks.org/jinroh/7524988
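The sawtooth construction in that animation is easy to verify numerically. The sketch below sums the first N terms of the standard Fourier series for the sawtooth f(x) = x on (-pi, pi) and shows the approximation improving as terms are added (the series is textbook material; the code is just an illustration).

```python
import math

def sawtooth_partial_sum(x, n_terms):
    """Fourier partial sum for the sawtooth f(x) = x on (-pi, pi)."""
    return 2.0 * sum((-1) ** (n + 1) * math.sin(n * x) / n
                     for n in range(1, n_terms + 1))

# Away from the jump at x = pi, adding terms drives the sum toward
# the true value of the sawtooth.
x = 1.0
for n_terms in (1, 5, 50):
    approx = sawtooth_partial_sum(x, n_terms)
    print(n_terms, approx, abs(approx - x))
```

With one term the approximation is crude; with fifty it hugs the sawtooth closely everywhere except right at the jumps.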

Armed with the knowledge that any periodic function can be modeled as a summation of sines and cosines, we can look at our pressure signal and determine which frequencies are present and the relative impact each has on the overall signal! Let’s not forget how powerful this tool is. Richard Feynman remarked, “It is easy to make a cake from a recipe; but can we write down the recipe if we are given the cake?” Joseph Fourier and his successors showed that we can have our cake, and determine its components too!
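In the spirit of Feynman’s cake, here is a minimal, pure-Python demonstration of recovering a “recipe”: a direct discrete Fourier transform applied to a synthetic signal. The frequencies and amplitudes are arbitrary choices of mine, and a real analysis would use an FFT library on real data, but the idea is the same.

```python
import math

# Synthetic "pressure" record: a mean plus two tidal-like frequencies.
N = 1024
samples = [10.0 + 1.5 * math.sin(2 * math.pi * 12 * t / N)
                + 0.5 * math.cos(2 * math.pi * 30 * t / N)
           for t in range(N)]

def dft_amplitude(signal, k):
    """Amplitude of frequency bin k via a direct discrete Fourier transform."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * k * t / n) for t, s in enumerate(signal))
    im = sum(-s * math.sin(2 * math.pi * k * t / n) for t, s in enumerate(signal))
    return 2 * math.hypot(re, im) / n  # one-sided amplitude (k > 0)

# The bins where we injected signal stand out; an empty bin is near zero.
for k in (12, 30, 45):
    print(k, round(dft_amplitude(samples, k), 3))
```

Given only the samples, the transform hands back the injected amplitudes at bins 12 and 30, and essentially nothing at bin 45: the recipe from the cake.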

 

Breaking down the Tides, Constituent by Constituent:

If the moon orbited around the Earth in a perfect circle in the plane of the Earth’s equator, and the sun were not present (a lot of assumptions!), a typical graph of a tidal signal might look like this:

Graph 4

The insight to be gained from this graph is that the orbital dynamics of astronomical bodies influence the tides in a regular manner (i.e., at specific frequencies). Each of these specific frequencies is given a name; in the example above, it is called the M2 frequency. When we now also consider the Sun’s effect (S2) alongside the Moon’s, our tidal graph may look like this:

Graph 5

Note the longer-term periodic trend of about two weeks, which corresponds to the alignment and misalignment of the sun and moon.

The M2, S2, and other frequencies are called constituents. Each is specified as a sum of various frequencies arising from planetary motion: the rotation rate of the earth, the orbit of the moon around the earth and of the earth around the sun, and periodicities in the location of lunar perigee, lunar orbital tilt, and the location of perihelion (see References & Resources for additional info).

When analyzing the tidal components of our signal, anywhere from 5 to 60 constituents must be taken into account, depending on the accuracy needed and the length of the raw data. Once these tidal constituents are determined by methods of spectral analysis (see References & Resources), they are removed from the pressure signal, leaving a “de-tided” signal. This is called the harmonic method of tide analysis, and it was developed by Lord Kelvin and Sir George Darwin beginning in 1867. We can now evaluate the variations in pressure we care about with great precision!
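To make the harmonic method concrete, here is a heavily simplified, pure-Python sketch of de-tiding. It fits just two constituents (M2 and S2) to a synthetic hourly record by least squares and subtracts the fit. This is a toy stand-in for T_TIDE, which additionally handles dozens of constituents, nodal corrections, and error estimates; the record, the event, and all amplitudes below are invented for illustration.

```python
import math

# Known constituent frequencies in cycles per hour (M2 ~12.42 h, S2 ~12.00 h).
FREQS = [1 / 12.4206, 1 / 12.0]

def design_row(t):
    """Regression row: a mean term plus a cos/sin pair per constituent."""
    row = [1.0]
    for f in FREQS:
        w = 2 * math.pi * f * t
        row += [math.cos(w), math.sin(w)]
    return row

def solve(a, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(a)
    m = [a[i][:] + [b[i]] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= factor * m[col][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

def detide(times, pressures):
    """Least-squares fit of the constituents, then subtract the fit."""
    rows = [design_row(t) for t in times]
    k = len(rows[0])
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    atb = [sum(r[i] * p for r, p in zip(rows, pressures)) for i in range(k)]
    coef = solve(ata, atb)
    fit = [sum(c * v for c, v in zip(coef, design_row(t))) for t in times]
    return [p - f for p, f in zip(pressures, fit)]

# Hourly synthetic record: mean + M2 + S2 tide + a small non-tidal event.
times = list(range(24 * 30))  # 30 days of hourly samples
tide = [10 + 1.0 * math.cos(2 * math.pi * FREQS[0] * t)
           + 0.4 * math.sin(2 * math.pi * FREQS[1] * t) for t in times]
event = [0.05 if 300 <= t < 340 else 0.0 for t in times]  # the ~5 cm signal
record = [a + b for a, b in zip(tide, event)]

residual = detide(times, record)
print(max(abs(r) for r in residual))  # the tide is gone; the event remains
```

The meter-scale tide vanishes from the residual while the centimeter-scale event survives, which is exactly why de-tiding makes small pressure changes visible.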

The final product of de-tiding a pressure signal is shown below for Point Purisima (PUR). Note how small the variations in pressure are in the de-tided signal versus the raw pressure.

Graph 6

Graph 7

 

References & Resources

The Feynman Lectures on Physics: Volume 1 http://www.feynmanlectures.caltech.edu/I_50.html

What Physics Teachers Get Wrong about Tides! PBS Digital Studios https://www.youtube.com/watch?v=pwChk4S99i4

Fourier Series Wikipedia https://en.wikipedia.org/wiki/Fourier_series

Harmonic Analysis and Prediction of Tides Stony Brook University http://www.math.stonybrook.edu/~tony/tides/harmonic.html

Classical tidal harmonic analysis including error estimates in MATLAB using T_TIDE (Pawlowicz et al.)

Note: I use T_TIDE to de-tide my data.

http://www.omg.unb.ca/Oceano/fundy_tides/T_Tide_CompAndGeo.pdf