ECON 2002.01, The Ohio State University, Spring 2024
See Grades
See Course schedule
;tags: Teaching
ECON 4002.01, The Ohio State University, Autumn 2023
See Grades
See Course schedule
;tags: Teaching
ECON 4002.01, The Ohio State University, Summer 2023
See Grades
See Course schedule
;tags: Teaching
ECON 2002.01, The Ohio State University, Spring 2023
See Grades
See Course schedule
;tags: Teaching
Media Focus, Sentiments and Capital Allocation in Space
by Yu Wang and Zebang Xu
Midwest Economics Association Annual Meeting 2023
The Impact of FinTech on Discrimination in Mortgage Lending
by Daniel Shoag
Northeast Ohio Economics Workshop 2022
;tags: Discussion
After changing all the corresponding code, such as the module name and the use statements in the src/ directory, you need to go to build/ and update the cache.toml file with the new module name.
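As a hedged sketch of that workflow (the module names OldMod/NewMod, the Fortran file, and the cache key below are all hypothetical stand-ins, not the project's actual names), the renaming can be scripted:

```shell
# Hypothetical demo tree standing in for the real project layout.
mkdir -p demo/src demo/build
printf 'module OldMod\nuse OldMod\n' > demo/src/main.f90
printf 'name = "OldMod"\n' > demo/build/cache.toml

# Rename the module in every source file under src/ ...
grep -rl 'OldMod' demo/src | xargs sed -i 's/OldMod/NewMod/g'
# ... then update the cached module name under build/.
sed -i 's/OldMod/NewMod/g' demo/build/cache.toml
```

(`sed -i` without a suffix assumes GNU sed.)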
;tags: Miscellaneous
Because of how Julia manages memory, allocation can be very expensive. In dynamic programming especially, we usually allocate a lot of arrays to store our data, so tracking where memory is allocated can speed up computation considerably.
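One way to do that tracking (a sketch; `script.jl` is a placeholder for your own program) is Julia's built-in allocation tracker:

```shell
# Run with allocation tracking; per-line allocation totals are written
# to *.mem files next to each source file after the process exits.
julia --track-allocation=user script.jl
```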
;tags: Miscellaneous
ECON 4002.02, The Ohio State University, Autumn 2022
See Grades
See Course schedule
;tags: Teaching
We can separate the definition of the desired point from the label of the point, e.g.:
\node[draw,fill=red,circle,inner sep=1pt] (x00) at ( $ (0, 0)$ ) {};
\node[below] at (x00) {$x_{00}(a_1, a_2)$};
This way, the position of the point (0, 0) and where I should put the label are controlled separately.
One can also use \coordinate to define points, i.e., \coordinate[draw,fill=red,circle,inner sep=1pt] (x00) at ( $ (0, 0)$ );
\node[below] at (x00) {$x_{00}(a_1, a_2)$};
It seems that \coordinate cannot take a label, so there is no need to include the empty {} at the end as with \node.
Define points first and give every point a reasonable name.
\draw (0, 0) to[bend right=40] (5, 5);
The curve can bend right or bend left, depending on whether it goes from (0, 0) to (5, 5) or the other way around. bend right=40 specifies the degree of bending.
Here I use a Bézier curve as an example.
\draw (0, 0) to[bend right=40]
node[pos=0.2,draw,fill=red,circle,inner sep=1pt] (a) {}
(5, 5);
The pos option in node defines at what fraction of the curve the node is placed. This node is named (a).
Let the arbitrary point be (a).
\path (a); \pgfgetlastxy{\xcoord}{\ycoord};
\coordinate (a_x) at (\xcoord, 0);
\coordinate (a_y) at (0, \ycoord);
First, put the path on the point (a) so that pgf remembers it. Second, \pgfgetlastxy outputs the x-coordinate \xcoord and y-coordinate \ycoord of the last path, i.e., of point (a). Finally, we can define the points (a_x) and (a_y) at the corresponding coordinates on the axes.
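Putting these pieces together, here is a minimal standalone sketch (the curve and the pos value are just illustrative) that projects a point on the curve onto both axes:

```latex
\documentclass[tikz, margin = 1mm]{standalone}
\begin{document}
\begin{tikzpicture}
% place a node (a) partway along the curve
\draw (0, 0) to[bend right=40]
    node[pos=0.2,draw,fill=red,circle,inner sep=1pt] (a) {}
    (5, 5);
% recover the coordinates of (a)
\path (a); \pgfgetlastxy{\xcoord}{\ycoord};
\coordinate (a_x) at (\xcoord, 0);
\coordinate (a_y) at (0, \ycoord);
% dashed projections onto the axes
\draw[dashed] (a) -- (a_x);
\draw[dashed] (a) -- (a_y);
\end{tikzpicture}
\end{document}
```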
Again I use a Bézier curve as an example.
\draw (0, 0) to[bend right=40]
node[pos=0.5,draw,fill=red,circle,inner sep=1pt] (a) {}
node[pos=0.51] (b) {}
(5, 5);
\draw[shorten >=-1cm, shorten <=-1cm, thick, red] (a) -- (b);
Instead of constructing the tangent line in a dedicated way, I found that simply defining two close points ((a) and (b)) and connecting them works. Notice that I didn’t draw the inner circle at point (b). When connecting the two points, use negative numbers in shorten to actually extend the line outward.
Need to add \usetikzlibrary{decorations.pathreplacing} in the preamble.
%%% brace on up/right
\draw [decorate,decoration={brace,amplitude=4pt},xshift=0pt,yshift=3pt]
(a) -- (b) node [black,midway,yshift=.3cm] {\footnotesize $foo$};
%%% brace on down/left (mirror)
\draw [decorate,decoration={brace,amplitude=4pt, mirror},xshift=0pt,yshift=3pt]
(a) -- (b) node [black,midway,yshift=.3cm] {\footnotesize $foo$};
Modify xshift and yshift to micro-adjust the brace placement.
Need to add \usetikzlibrary{intersections} in the preamble.
\documentclass[tikz, margin = 1mm]{standalone}
\usetikzlibrary{intersections}
\begin{document}
\begin{tikzpicture}
\draw[name path=a] (0, 0) to[bend right = 40] (2, 0);
\draw[name path=b] (0, -.5) to[bend left = 40] (2, -.5);
\path[name intersections={of=a and b, by=e}];
\node[draw,fill=red,circle,inner sep=1pt] at (e) {};
\end{tikzpicture}
\end{document}
Explanation:
name path=name gives the path a name so it can be referred to later.
\path[name intersections={...}] computes the intersections: of specifies the two paths to intersect, and by names the resulting intersection point.
\node draws the point as a circle; it can be any other shape.

To set the font for every node, simply using \tikzstyle suffices:
\begin{tikzpicture}
\tikzstyle{every node}=[font=\scriptsize]
...
\end{tikzpicture}
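As an aside, newer PGF manuals favor the every node/.style key over \tikzstyle; a minimal equivalent (assuming the same goal of shrinking node fonts) would be:

```latex
\begin{tikzpicture}[every node/.style={font=\scriptsize}]
  % ... same picture contents ...
\end{tikzpicture}
```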
Need to add \usetikzlibrary{calc} in the preamble. An example using the syntax of the calc library:
\documentclass[tikz, margin = 1mm]{standalone}
\usetikzlibrary{calc, intersections}
\begin{document}
\begin{tikzpicture}
% The following figure shows how golden section search separates the 2-D space.
\pgfmathsetmacro{\x}{5};
\pgfmathsetmacro{\y}{5};
\pgfmathsetmacro{\tau}{0.618};
\draw[-] (0, 0) -- (\x, 0) -- (\x, \y) -- (0, \y) -- (0, 0);
% define (a, b) points in both dimension
\coordinate[draw,fill=red,circle,inner sep=1pt] (x00) at ( $ (0, 0)$ );
\coordinate[draw,fill=red,circle,inner sep=1pt] (x01) at ( $ (\x, 0) $ );
\coordinate[draw,fill=red,circle,inner sep=1pt] (x10) at ( $ (0, \y)$ );
\coordinate[draw,fill=red,circle,inner sep=1pt] (x11) at ( $ (\x, \y) $ );
% use calc library to calculate the coordinate of the (c, d) points in both dimension
\coordinate[draw,fill=blue,circle,inner sep=1pt](c1) at ( $ (x00)!1-\tau!(x01) $ );
\coordinate[draw,fill=blue,circle,inner sep=1pt](c1mirror) at ( $ (x10)!1-\tau!(x11) $ );
\coordinate[draw,fill=blue,circle,inner sep=1pt](d1) at ( $ (x00)!\tau!(x01) $ );
\coordinate[draw,fill=blue,circle,inner sep=1pt](d1mirror) at ( $ (x10)!\tau!(x11) $ );
\coordinate[draw,fill=blue,circle,inner sep=1pt](c2) at ( $ (x00)!1-\tau!(x10) $ );
\coordinate[draw,fill=blue,circle,inner sep=1pt](c2mirror) at ( $ (x01)!1-\tau!(x11) $ );
\coordinate[draw,fill=blue,circle,inner sep=1pt](d2) at ( $ (x00)!\tau!(x10) $ );
\coordinate[draw,fill=blue,circle,inner sep=1pt](d2mirror) at ( $ (x01)!\tau!(x11) $ );
% draw dashed line to connect coordinates
\draw[dashed, name path = dashc1] (c1) -- (c1mirror);
\draw[dashed, name path = dashd1] (d1) -- (d1mirror);
\draw[dashed, name path = dashc2] (c2) -- (c2mirror);
\draw[dashed, name path = dashd2] (d2) -- (d2mirror);
% define the interior (c, d) points using coordinates
\path[name intersections={of=dashc1 and dashc2, by=y00}];
\node[draw,fill=orange,circle,inner sep=1pt] at (y00) {};
\path[name intersections={of=dashc1 and dashd2, by=y10}];
\node[draw,fill=orange,circle,inner sep=1pt] at (y10) {};
\path[name intersections={of=dashd1 and dashc2, by=y01}];
\node[draw,fill=orange,circle,inner sep=1pt] at (y01) {};
\path[name intersections={of=dashd1 and dashd2, by=y11}];
\node[draw,fill=orange,circle,inner sep=1pt] at (y11) {};
\end{tikzpicture}
\end{document}
;tags: Miscellaneous
ECON 4002.01, The Ohio State University, Summer 2022
;tags: Teaching
The X220 Tablet is such a lovely laptop. Although its CPU (i7-2640M) is pretty weak by 2022 standards, it is still a decent Linux laptop that even has a touch screen and an integrated stylus.
There is a serious problem with this laptop, though. Since the fan is full of dust and the thermal paste on the CPU has completely dried out, the CPU temperature sits constantly at 90 degrees Celsius, which is really hot.
Today I disassembled the whole laptop to clean the dust off the fan and reapply the thermal paste. I mainly followed this video as a step-by-step tutorial.
First of all, it is easy to remove the keyboard by unscrewing the screws located on the back of the laptop.
The following image also includes the tools I used for this disassembly.
Another image gives a closer look at the palmrest and its cables. Notice that at this stage, we can already see that the fan is extremely dirty.
The next step is to remove the palmrest. Although both sides of the palmrest are easy to detach, it took some technique to actually take it out: some angle is required to release the palmrest from the front side of the laptop.
We can see the WiFi card at the bottom-right corner of the laptop. Zooming in,
we need to remove a screw to take out the WiFi card.
To take out the remaining plastic cover shown in the following image, I had to stick the blue triangular plastic tool into the joint between the cover and the bottom, and then pry the cover off.
Taking it out, we get
At this stage, remembering to remove all the necessary screws, we can take the whole motherboard out.
After cleaning the fan, repasting, and reassembling everything, I successfully brought the CPU temperature down to 60 degrees.
This shows once again the importance of maintaining a laptop and preventing it from overheating.
;tags: Miscellaneous Technology
Joint work with Sungmin Park
paper; slides: MEA Annual Meeting; slides: NOE Workshop
A loan-to-value (LTV) ratio ceiling is a government policy that caps households’ mortgages relative to their house value, often intended to reduce booms in house prices. This paper studies the effects of this policy on house prices, using a simple two-period overlapping-generations model featuring within-generation inequality. In contrast to popular belief, we find that a strict (low) LTV ceiling raises long-run house prices, as lenders respond to the policy by substituting away from mortgage lending toward purchasing more houses. The policy’s positive effect on house prices is more severe with greater inequality, and a strict ceiling is especially harmful to the poor. Taxes can only intensify the positive effect on house prices, although they can mitigate the adverse effects on welfare.
;tags: Working
I study how much financial frictions and endogenous partial irreversibility of capital explain aggregate investment volatility. I propose a heterogeneous-firms model with real and financial frictions. Firms adjust their capital stock by trading on the used-capital market, so partial irreversibility is endogenized through the market price. This irreversibility creates two opposing forces on investment volatility: (1) capital investment is relatively cheaper in a recession, which attracts firms to invest in capital and dampens the fall of aggregate investment; (2) in a downturn, the irreversibility of capital increases, making investment riskier and exacerbating the fall of aggregate investment. In my model, collateral constraints amplify the first force and dampen the response of aggregate investment to match the moments in the data. A collateral constraint based on the resale value of capital affects firms’ borrowing capacity unevenly, depending on their upward or downward adjustment decision. That decision determines a new price of investment goods and thus endogenously alters the collateral constraint.
;tags: Working
ECON 2002.01, The Ohio State University, Spring 2022
;tags: Teaching
I have two goals to share with you in this blog post:
If you want to see the source code for this website, it is stored in huijunchen9260/websrc repository.
First, let me briefly talk about WHY I find blogit to be the static site generator for me, despite there being tons of great, modern static site generators on the web.
Long story short, it mainly comes down to the conflict between my desire to customize everything the way I like and my laziness about repeatedly making that effort. On the one hand, the decent blog post Write HTML in HTML shows probably the final stage of a Linux user who just cannot stop customizing everything on their website: write everything in HTML. However, I don’t want to write my own website in raw HTML. Probably I am not at that stage yet.
On the other hand, to avoid writing HTML, most current static site generators are full of fancy themes and JavaScript. The output of contemporary static site generators is great: they are carefully designed to look good without any effort. To be honest, before making this website with blogit, I used Hugo to generate my website. If you are lucky and find a theme that does everything for you, and you are not as needy a person as I am, then congratulations: the customization confined by the theme author works for you. Sadly, I AM a very needy person. I really want everything to be exactly the way I want it, and the restrictions imposed by the theme author just drove me crazy whenever I tried to update my website. Eventually, I realized that the upfront cost of customizing my website was too large, and I simply didn’t update my blog for about two years.
That is to say, I fell into a paradox. On one side of the spectrum, I really want to customize everything; on the other, I really don’t want to put in the effort. That’s why I needed to find a sweet spot between these two extremes.
Because it is simple.
It is simple enough for me to understand every detail, yet it does the heavy lifting for me (well, not quite every detail, to be honest; that is an exaggeration). blogit is nothing but a Makefile that inserts HTML tags into Markdown files to turn them into HTML pages. I don’t know what was going on inside the head of blogit’s author, but that must have been terribly tedious work. Thankfully, once it is done, it is simple and magical: every tag is placed precisely in the corresponding webpage, and the list of articles that every blog needs is generated automatically.
Compare that with all the other static site generators, which involve learning YAML, TOML, JS, and a lot of “settings” that just confuse me.
Yes, that is true, and I am not satisfied with that.
A good personal website, in my opinion, should include individual pages for specific purposes. For example, pages for Research and Teaching, plus a link to the Blog, are necessary for a Ph.D. student’s website.
In order to achieve this, I rewrote the code that generates blog/index.html in the Makefile so that it also generates blog/teaching.html.
The logic to generate this page is to use the existing tags system:

1. Use grep to find out which .md files actually have the Teaching tag, and record them in a shell variable $WP.
2. If no file has the Teaching tag, create an HTML page that says “Under Construction”.
3. Use a git command to extract the modified date for each markdown file, and update DATE_EDITED:
git log -1 --date="format:$(BLOG_DATE_FORMAT_INDEX)" --pretty=format:'%ad%n' -- "$$f"
   - git log creates the log and enters a pager.
   - git log -1 shows only the last commit (the -1 part).
   - --date with format: sets the date format; I use %F, which yields the format %Y-%m-%d.
   - --pretty with format: sets the pretty-print format; %ad outputs the author date, and %n means newline.
4. For each file with the Teaching tag, generate the DATE, URL, and TITLE of the article using git, and turn them into links on the Teaching page.

The corresponding code block is also listed below:
blog/teaching.html: teaching.md $(ARTICLES) $(TAGFILES) $(addprefix templates/,$(addsuffix .html,header teaching_header article_list_header article_entry article_separator article_list_footer teaching_footer footer))
mkdir -p blog
TITLE="$(BLOG_TITLE)"; \
PAGE_TITLE="Teaching -- $(BLOG_TITLE)"; \
DATE_EDITED="$(shell git log -1 --date="format:$(BLOG_DATE_FORMAT)" --pretty=format:'%ad' -- "$<")"; \
export TITLE; \
export PAGE_TITLE; \
export DATE_EDITED; \
envsubst < templates/header.html > $@; \
envsubst < templates/teaching_header.html >> $@; \
markdown < teaching.md >> $@; \
f1=true; \
: 'grep for the markdown files with Teaching tag'; \
for f in $(ARTICLES); do \
grep -qE "; *tags: .*Teaching.*" "$$f" && { "$$f1" && WP="$$f" || WP="$$WP $$f"; f1=false; }; \
done ; \
: 'If no files have the Teaching tag, output Under Construction'; \
[ -z "$$WP" ] && { \
echo "<h1>Under Construction</h1>" >> $@ ; \
envsubst < templates/teaching_footer.html >> $@ ; \
envsubst < templates/footer.html >> $@ ; \
exit ; \
} ; \
: 'If there is a Teaching tag, find modified dates for all files and update DATE_EDITED'; \
[ -n "$$WP" ] && { \
articleNewestDate="$$(for f in $$WP; do \
git log -1 --date="format:$(BLOG_DATE_FORMAT_INDEX)" --pretty=format:'%ad%n' -- "$$f"; \
done | sort -rk2 | head -n 1)"; \
tmpNewest=$$(echo $$articleNewestDate | tr -d '-'); \
tmpEdit=$$(echo $$DATE_EDITED | tr -d '-'); \
[ "$$tmpNewest" -ge "$$tmpEdit" ] && DATE_EDITED="$$articleNewestDate"; \
export DATE_EDITED; \
}; \
: 'If there are files with the Teaching tag, add a hyperlink to each file on this Teaching page'; \
[ -n "$$WP" ] && { \
first=true; \
echo "<h2>Teaching</h2>" >> $@ ; \
envsubst < templates/article_list_header.html >> $@; \
for f in $$WP; do \
printf '%s ' "$$f"; \
git log -1 --date="format:%s $(BLOG_DATE_FORMAT_INDEX)" --pretty=format:'%ad%n' -- "$$f"; \
done | sort -rk2 | cut -d" " -f1,3- | while IFS=" " read -r FILE DATE; do \
"$$first" || envsubst < templates/article_separator.html; \
URL="`printf '%s' "\$$FILE" | sed 's,^$(BLOG_SRC)/\(.*\).md,\1,'`.html" \
DATE="$$DATE" \
TITLE="`head -n1 "\$$FILE" | sed -e 's/^# //g'`" \
envsubst < templates/article_entry.html; \
first=false; \
done >> $@; \
envsubst < templates/article_list_footer.html >> $@; \
}; \
envsubst < templates/teaching_footer.html >> $@; \
envsubst < templates/footer.html >> $@
With some modifications for each individual page, I can systematically generate everything using just make build, and publish directly on GitHub using make deploy.
Update 2022-02-10: added the method for updating modified dates.
;tags: Miscellaneous Technology
This is my field paper from my second year of the Ph.D., Winter 2020 to Summer 2021.
I study the effects of labor taxes, tax policy uncertainty, and lump-sum transfers on the wealth distribution in a modified Krusell-Smith model. Households face uninsurable idiosyncratic unemployment shocks as well as aggregate taxation shocks. In this setup, I find that the labor tax and the lump-sum transfer increase wealth inequality, while tax policy uncertainty decreases it. The labor tax suppresses the major income source of the poor: labor wages. The effect of tax policy uncertainty on the wealth distribution and inequality depends on households’ degree of risk aversion. When risk aversion is higher, the rich maintain similar levels of consumption by slowing down their capital accumulation, while this relationship is reversed for the poor, resulting in decreasing wealth inequality. The lump-sum transfer serves as insurance against idiosyncratic and aggregate shocks; it therefore replaces part of the role of capital and reduces the poor’s incentive to accumulate it. As a result, the lump-sum transfer mitigates some effects of the uncertainty but worsens wealth inequality.
;tags: Working
CSE 5523, The Ohio State University, Autumn 2021
Economics has a long tradition of processing data that takes a different view from computer science. Applied economists dig into data to find correlation and causality between different factors, while computer scientists have developed machine learning to predict out-of-sample data points. In other words, economists interpret past data to understand causality, while computer scientists make predictions based on patterns in the data. This report investigates how well a recurrent neural network can replicate the interpretive power that linear regression provides, and also how precise linear regression is in out-of-sample prediction. My results show that (1) linear regression interprets well within sample but performs devastatingly in out-of-sample prediction; (2) trained for enough epochs, a recurrent neural network can reach the interpretive power of linear regression, but maintaining good out-of-sample predictive power requires a large number of layers; and (3) both GRU and LSTM generally perform better than SimpleRNN, and the reset gate in GRU can prevent noise from outliers, while the output of LSTM deteriorates with outliers.
;tags: Miscellaneous
ECON 2002.01, The Ohio State University, Summer 2021
;tags: Teaching