My Erlang Tips
1. Creating a list
It is important to attach new elements to the beginning of the list (see: List Handling).
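For example, a minimal sketch (double_all/1 is just an illustrative name): build the result by prepending to an accumulator and reverse once at the end, instead of appending with ++ at every step.

    %% Prepending is O(1); appending with ++ would copy the accumulator each time.
    double_all(List) -> double_all(List, []).

    double_all([], Acc) -> lists:reverse(Acc);            %% reverse once at the end
    double_all([H|T], Acc) -> double_all(T, [2*H | Acc]). %% attach to the head of Acc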
2. Higher-order functions
- Closures are very useful because they let us carry parameters out of their original context.
Anonymous functions (lambda functions)
    %% wrong
    [spawn(fun (X) -> timer:sleep(10), io:format("~p~n", [X]) end) || X <- lists:seq(1, 10)].

    %% right
    [spawn(fun () -> timer:sleep(10), io:format("~p~n", [X]) end) || X <- lists:seq(1, 10)].
Here we need a fun of arity 0 (fun/0) instead of fun/1 inside the list comprehension: spawn/1 expects a zero-arity fun, and X is captured from the comprehension as part of the closure.
3. Different ways to spawn and pass arguments
Using spawn/3:

    erlang:spawn(?MODULE, function, [arg1, arg2, arg3]).

This solution requires the function to be exported.
Using spawn/1:

    spawn(fun () -> function(arg1, arg2, arg3) end).
Using apply/2 in spawn/3:

    %% Function is a reference to a local (non-exported) function
    Function = fun function/arity,
    Args = [arg1, arg2, arg3],
    spawn(erlang, apply, [Function, Args]).

The advantage of apply is that the function to run can be computed dynamically.
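For instance, a minimal sketch (fast_task/0, slow_task/0 and choose_task/1 are hypothetical functions assumed to live in the same module): the fun to run is selected at run time and may refer to non-exported code.

    %% Only erlang:apply/2 needs to be callable from outside; the fun value
    %% carries the reference to the local function.
    choose_task(fast) -> fun fast_task/0;
    choose_task(slow) -> fun slow_task/0.

    start(Kind) ->
        Task = choose_task(Kind),          %% fun chosen dynamically
        spawn(erlang, apply, [Task, []]).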
4. spawn_link
It is an atomic operation combining spawn and link.
    spawn_link(Mod, Func, Args) ->
        %% == should be protected by atomicity ==
        Pid = spawn(Mod, Func, Args),
        link(Pid),
        %% ======================================
        Pid.
DO NOT USE THIS: the spawned process could die before link/1 runs, which is why the real spawn_link is atomic.
Its argument is a function, just like the argument of spawn.
    link(spawn(error_and_process, chain, [3])).
    %% If we want to use spawn_link to replace link,
    %% we have to put spawn into a lambda function:
    spawn_link(fun() -> linkmon:chain(3) end).
5. Different behaviours around throws, errors and exits
5.1. Exit signals
- What is an exit signal?
Besides normal message passing, a process can send a signal to other processes to let them know it is exiting. It is a special kind of message. An exit signal contains the exit reason, so other processes can decide how to react.
- Processes are linked using exit signals
A process can be linked to another process by subscribing to that process's exit signal. All linked processes receive the exit signal from the terminating process.
- What does it mean for a process to trap exits?
When a process traps exit signals, the exit signals received over links are converted into messages, which are then put in its mailbox.
- How do you make a process trap exits?
A normal process can trap exit signals by turning itself into a system process:

    process_flag(trap_exit, true)

A system process can handle exit signals because they are converted into messages, so it can handle them with a plain receive (see the sketch at the end of this list).
- What are the different reasons for a process to terminate?
- A normal exit
- Unhandled errors
- Killed
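A minimal sketch of trapping exits (the names here are illustrative): the parent converts itself into a system process, links to a child that crashes, and receives the crash as an ordinary message.

    %% The spawned parent traps exits, so the child's abnormal exit arrives
    %% in its mailbox as {'EXIT', Pid, Reason} instead of killing it.
    spawn(fun() ->
              process_flag(trap_exit, true),
              Child = spawn_link(fun() -> exit(oops) end),
              receive
                  {'EXIT', Child, Reason} ->
                      io:format("child ~p exited: ~p~n", [Child, Reason])
              end
          end).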
5.2. Exit
- Exception source: spawn_link(fun() -> ok end)
- Untrapped result: - nothing -
- Trapped result: {'EXIT', <0.61.0>, normal}
- The process exited normally, without a problem.
- Exception source: spawn_link(fun() -> exit(reason) end)
- Untrapped result: ** exception exit: reason
- Trapped result: {'EXIT', <0.55.0>, reason}
- The process has terminated for a custom reason. If the exit is not trapped (the receiver is not a system process), the process just crashes.
- Exception source: spawn_link(fun() -> exit(normal) end)
- Untrapped result: - nothing -
- Trapped result: {'EXIT', <0.58.0>, normal}
- This emulates a process terminating normally.
5.3. Error
- Exception source: spawn_link(fun() -> 1/0 end)
- Untrapped result: Error in process <0.44.0> with exit value: {badarith, [{erlang, '/', [1,0]}]}
- Trapped result: {'EXIT', <0.52.0>, {badarith, [{erlang, '/', [1,0]}]}}
- The error is not caught by any try...catch block and bubbles up into an 'EXIT'. It behaves the same as exit(reason), but with a stack trace giving more details about what happened.
- Exception source: spawn_link(fun() -> erlang:error(reason) end)
- Untrapped result: Error in process <0.47.0> with exit value: {reason, [{erlang, apply, 2}]}
- Trapped result: {'EXIT', <0.74.0>, {reason, [{erlang, apply, 2}]}}
- Same as the previous error case; erlang:error/1 is meant to let you do exactly that.
5.4. Throw
- Exception source: spawn_link(fun() -> throw(rocks) end)
- Untrapped result: Error in process <0.51.0> with exit value: {{nocatch, rocks}, [{erlang, apply, 2}]}
- Trapped result: {'EXIT', <0.79.0>, {{nocatch, rocks}, [{erlang, apply, 2}]}}
- Because the throw is not caught by a try...catch, it bubbles up into an error and then into an 'EXIT'.
5.5. Kill signal
- Generated with exit/2, like a gun: it allows a process to kill another one from a distance.
- A kill exit signal forcefully terminates the receiving process, even if it is trapping exits.
- When a process is killed with kill, it emits the exit signal killed to its linked processes (see the sketch below).
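A minimal shell sketch: even a process that traps exits cannot survive a kill signal.

    %% Trapping exits converts most exit signals into messages,
    %% but the 'kill' reason terminates the process unconditionally.
    Pid = spawn(fun() ->
                    process_flag(trap_exit, true),
                    receive Msg -> io:format("trapped: ~p~n", [Msg]) end
                end),
    exit(Pid, kill),          %% cannot be trapped
    timer:sleep(100),
    is_process_alive(Pid).    %% => false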
5.6. Summary
- Exit signals with a reason terminate the receiving process unless it is trapping exits; in that case the exit signals are converted into normal messages that can be handled with receive.
- normal exit signals are harmless. When untrapped, they are ignored by the receiving process; when trapped, they are received as messages. If a process sends one to itself, it terminates with a normal exit reason.
- kill exit signals always terminate the receiving process, even if it is trapping exits.
6. What is the usage of register?
Process communication is done via Pids, but a Pid is only known to the parent that spawned the process. If we want other processes to communicate with it, we have to pass that Pid around as an extra argument.
This has the advantage that if you don't reveal the Pid of a process, other processes cannot interact with it in any way.
Its disadvantage is that it is inconvenient. register(Name, Pid) is a method for publishing a process identifier so that any process in the system can communicate with it. The atom Name associated with Pid can then be used globally.
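A minimal sketch (my_logger is an arbitrary name chosen here): once registered, the process can be reached by name from anywhere, without passing its Pid around.

    %% Register a spawned process under an atom, then send to the name.
    Pid = spawn(fun() ->
                    receive {log, Msg} -> io:format("log: ~p~n", [Msg]) end
                end),
    register(my_logger, Pid),
    my_logger ! {log, "hello"},   %% the sender never needs the Pid
    whereis(my_logger).           %% look the Pid up again if required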
7. Different forms of pattern matching
Function signature/evaluation
    area({rectangle, Width, Height}) -> Width * Height;
    area({square, Side}) -> Side * Side;
    area({circle, Radius}) -> 3.1415926 * Radius * Radius.
Haskell has something similar; OCaml doesn't let you define a function as several pattern-matching clauses like this (you use match instead).
= is actually a pattern-matching operator.

    Person = {person, {name, joe, armstrong}, {footsize, 42}}.
    {_, {_, Who, _}, _} = Person.
Receive block for capturing incoming messages to a process
    postman() ->
        receive
            {send, {Fr, To, _Content} = Pkg} ->
                io:format("~s sending something to ~s ~n", [Fr, To]),
                self() ! {recv, Pkg},
                postman();
            {recv, {To, Fr, Content} = _Pkg} ->
                io:format("~s got a ~s from ~s~n", [Fr, Content, To]),
                postman();
            stop ->
                io:format("Shutting down postman ...~n")
        end.
Notice the pattern matching with =.
Case block
    demote(Arg) when is_tuple(Arg) ->
        case Arg of
            {ceo, male} -> "Fire the CEO";
            {ceo, female} -> "Fire the CEO";
            {ceo, _} -> "Cannot fire non existing CEO";
            _ -> "unknown pattern"
        end.
It is very similar to OCaml's match ... with:

    let two_or_four l = match l with
      | _::_::[] -> true
      | _::_::_::_::[] -> true
      | _ -> false ;;
In a try expression
    is_even(Number) ->
        Type = try Number rem 2 of
                   0 when is_number(Number) -> true;
                   1 when is_number(Number) -> false
               catch
                   _ErrType:_Err -> "I can't tell"
               end,
        io:format("Is ~p even? ~p~n", [Number, Type]).
It is also similar to a case block.
8. How to use monitors to restart worker processes?
8.1. Write a function that starts and monitors several worker processes. If any of the worker processes dies abnormally, restart it.
Suppose we have the following worker
    -module(worker).
    -compile([export_all]).

    %% Run a worker
    worker() ->
        io:format("I am worker ~p~n", [self()]),
        receive
            stop ->
                io:format("received stop, so stop work now~n");
            Message ->
                io:format("receive: ~p~n", [Message]),
                worker()
        after 2000 ->
            worker()
        end.
We can create multiple monitor processes and use spawn_monitor for each worker process individually.
    on_exit_aux(F) ->
        {Pid, Ref} = spawn_monitor(F),
        io:format("waiting message from {~p, ~p}~n", [Pid, Ref]),
        receive
            stop ->
                %% this pattern lets us stop workers via their monitoring process
                io:format("stop monitored process ~p~n", [Pid]),
                Pid ! stop;   %% stop worker process normally
            {'DOWN', Ref, process, Pid, normal} ->
                io:format("~p exited, because of normal~n", [Pid]);
            {'DOWN', Ref, process, Pid, Why} ->
                io:format("~p exited abnormally, because of ~p~n", [Pid, Why]),
                on_exit_aux(F)
        end.

    on_exit(F) ->
        spawn(fun () -> on_exit_aux(F) end).

    %% L is a list of functions used as workers.
    %% It returns a list of Pids, each of which monitors a worker and keeps it alive.
    cluster01(L) ->
        lists:reverse(cluster01(L, [])).

    cluster01([], Acc) ->
        Acc;
    cluster01([H|T], Acc) ->
        cluster01(T, [on_exit(H)|Acc]).

    stop_cluster01([]) ->
        void;
    stop_cluster01([H|T]) ->
        spawn(fun () ->
                  H ! stop,   %% send stop to Pid; Pid could be a monitor process or a worker process
                  stop_cluster01(T)
              end).
Finally, we start them:
    q05(Num) ->
        observer:start(),
        lib_misc:cluster01([fun worker:worker/0 || _ <- lists:seq(1, Num)]).
8.2. Write a function that starts and monitors several worker processes. If any of the worker processes dies abnormally, kill all the worker processes and restart them all.
Suppose we use the same worker. Now we prefer to create only one monitor process, and inside it we spawn_monitor multiple worker processes.
    create_workers(Num) ->
        %% Workers is a list of {Pid, Ref} pairs
        Workers = [spawn_monitor(worker, worker, []) || _ <- lists:seq(1, Num)],
        monitor_workers(Workers).

    monitor_workers(Workers) ->
        io:format("~p monitoring workers ~p~n", [self(), Workers]),
        Pids = lists:map(fun ({Pid, _Ref}) -> Pid end, Workers),
        receive
            stop ->   %% stop all workers
                io:format("stop all workers~n"),
                stop_workers(Pids);
            restart ->
                io:format("restart all workers~n"),
                restart_workers(Pids, nil);
            {'DOWN', Ref, process, Pid, normal} ->
                io:format("~p exited, because of normal~n", [Pid]),
                %% keep the full {Pid, Ref} list (not just the Pids) for the recursive call
                monitor_workers(Workers);
            {'DOWN', Ref, process, Pid, Why} ->
                io:format("~p exited abnormally, because of ~p~n", [Pid, Why]),
                io:format("restart all workers~n"),
                restart_workers(Pids, Pid);
            _ ->
                monitor_workers(Workers)
        end.

    restart_workers(Pids, Pid) ->
        N = length(Pids),
        StoppingWorkers = lists:filter(fun (X) -> X =/= Pid end, Pids),
        stop_workers(StoppingWorkers),
        io:format("~p have been stopped~n", [StoppingWorkers]),
        spawn(fun () -> create_workers(N) end).

    stop_workers(Pids) ->
        [Pid ! stop || Pid <- Pids].
Now, use them from the main module:
    q06(Num) ->
        observer:start(),
        spawn(fun () -> lib_misc:create_workers(Num) end).
8.3. Summary
spawn_monitor + receive...end once made me very confused: I either left the monitor process waiting forever in the shell, or never received any 'DOWN' message when the monitored process exited. The reasons are:
- receive...end is synchronous, so it makes the current process wait. The solution is to spawn the monitor process from an anonymous function (lambda function).
- spawn_monitor should be used with receive...end in the SAME process. So, after spawn_monitor we should NOT receive...end in another process (do NOT spawn an anonymous function just for the receive...end). Otherwise you won't receive the messages sent to the monitoring process; you'll be receiving messages sent to the anonymous function's process.
- Just remember: spawn_monitor and receive...end must be used in the SAME process, because what receive...end does is receive the messages sent to the current process, which is the process where spawn_monitor was called.
- One clause of receive...end typically contains a recursive structure which does spawn_monitor again to restart some function F (see the sketch below).
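A minimal sketch of the pitfall (watcher/1 and bad_watcher/1 are hypothetical names): the 'DOWN' message is delivered to the process that called spawn_monitor, so the receive must run in that same process.

    %% right: spawn_monitor and receive...end run in the same (spawned) process
    watcher(F) ->
        spawn(fun() ->
                  {Pid, Ref} = spawn_monitor(F),
                  receive
                      {'DOWN', Ref, process, Pid, Reason} ->
                          io:format("~p went down: ~p~n", [Pid, Reason])
                  end
              end).

    %% wrong: the monitor belongs to the caller of spawn_monitor, so the
    %% 'DOWN' message goes to the caller; this inner receive waits forever.
    bad_watcher(F) ->
        {Pid, Ref} = spawn_monitor(F),
        spawn(fun() ->
                  receive
                      {'DOWN', Ref, process, Pid, Reason} ->
                          io:format("~p went down: ~p~n", [Pid, Reason])
                  end
              end).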