Global Cat Day is observed annually on October 16.
HOW TO OBSERVE
Use #GlobalCatDay to post on social media.
Global Cat Day (2017) takes the place of National Feral Cat Day, which was initiated by Alley Cat Allies in 2001.
The National Symbols Officer of Australia recently wrote to Juice Media, producers of Rap News and Honest Government Adverts, suggesting that its “use” of Australia’s coat of arms violated various Australian laws. This threat came despite the fact that Juice Media’s videos are clearly satire and no reasonable viewer could mistake them for official publications. Indeed, the coat of arms that appeared in the Honest Government Adverts series does not even spell “Australian” correctly.
It is unfortunate that the Australian government cannot distinguish between impersonation and satire. But it is especially worrying because the government has proposed legislation that would impose jail terms for impersonation of a government agency. Some laws against impersonating government officials can be appropriate (Australia, like the U.S., is seeing telephone scams from fraudsters claiming to be tax officials). But the proposed legislation in Australia lacks sufficient safeguards. Moreover, the recent letter to Juice Media shows that the government may lack the judgment needed to apply the law fairly.
In a submission to Parliament, Australian Lawyers for Human Rights explains that the proposed legislation is too broad. For example, the provision that imposes a two-year sentence for impersonation of a government agency does not require any intent to deceive. Similarly, it does not require that any actual harm was caused by the impersonation. Thus, the law could sweep in conduct outside the kind of fraud that motivates the bill.
The proposed legislation does include an exemption for “conduct engaged in solely for genuine satirical, academic or artistic purposes.” But, as critics have noted, this gives the government leeway to attack satire that it does not consider “genuine.” Similarly, the limitation that conduct be “solely” for the purpose of satire could chill speech. Is a video produced for satirical purposes unprotected because it was also created for the purpose of supporting advertising revenue?
Government lawyers failing to understand satire is hardly unique to Australia. In 2005, a lawyer representing President Bush wrote to The Onion claiming that the satirical site was violating the law with its use of the presidential seal. The Onion responded that it was “inconceivable” that anyone would understand its use of the seal to be anything but parody. The White House wisely elected not to pursue the matter further. If it had, it likely would have lost on First Amendment grounds. Australia, however, does not have a First Amendment (or even a written bill of rights) so civil libertarians there are rightly concerned that the proposed law against impersonation could be used to attack political commentary. We hope the Australian government either kills the bill or amends the law to include both a requirement of intent to deceive and a more robust exemption for satire.
In its own style, Juice Media has responded to the proposed legislation with an “honest” government advert.
‘Australien Government’ coat of arms Juice Media, CC BY-NC-SA 3.0 AU
Cartographer Geraldine Sarmiento from Mapzen explores the drawn forms of cartography, such as lines, bridges, and buildings.
What is the visual language of cartography? Let’s explore this question through the medium of drawing. After all, it is this abstract representation of place onto a surface of fewer dimensions that the act of cartography entails.
Be sure to check out the Morphology tool to poke at the forms yourself.
Every day a little death, in the parlour, in the bed. In the lips and in the eyes. In the curtains in the silver, in the buttons, in the bread, in the murmurs, in the gestures, in the pauses, in the sighs. – Sondheim
The most horrible sound in the world is that of a reviewer asking you to compare your computational method to another, existing method. Like bombing countries in the name of peace, the purity of intent drowns out the voices of our better angels as they whisper: at what cost.
Before the unnecessary drama of that last sentence sends you running back to the still-open browser tab documenting the world’s slow slide into a deeper, danker, more complete darkness that we’ve seen before, I should say that I understand that for most people this isn’t a problem. Most people don’t do research in computational statistics. Most people are happy.
So why does someone asking for a comparison of two methods for allegedly computing the same thing fill me with the sort of dread usually reserved for climbing down the ladder into my basement to discover, by the light of a single, swinging, naked lightbulb, that the evil clown I keep chained in the corner has escaped? Because it’s almost impossible to do well.
Many many years ago, when I still had all my hair and thought it was impressive when people proved things, I did a PhD in numerical analysis. These all tend to have the same structure:
Which is to say, I’ve done my share of simulation studies comparing algorithms.
So what changed? When did I start to get the fear every time someone mentioned comparing algorithms?
Well, I left numerical analysis and moved to statistics and I learnt the one true thing that all people who come to statistics must learn: statistics is hard.
When I used to compare deterministic algorithms it was easy: I would know the correct answer and so I could compare algorithms by comparing the error in their approximate solutions (perhaps taking into account things like how long it took to compute the answer).
But in statistics, the truth is random. Or the truth is a high-dimensional joint distribution that you cannot possibly know. So how can you really compare your algorithms, except possibly by comparing your answer to some sort of “gold standard” method that may or may not work?
[No I don’t speak Swedish, but one of my favourite songwriters/lyricists does. And sometimes I’m just that unbearable. Also the next part of this story takes place in Norway, which is near Sweden but produces worse music (Susanne Sundfør and M2M being notable exceptions)]
The first two statistical things I ever really worked on (in an office overlooking a fjord) were computationally tractable ways of approximating posterior distributions for specific types of models. The first of these was INLA. For those of you who haven’t heard of it, INLA (and its popular R implementation R-INLA) is a method for doing approximate posterior computation for a lot of the sorts of models you can fit in rstanarm and brms. So random effect models, multilevel models, models with splines, and spatial effects.
At the time, Stan didn’t exist (later, it barely existed), so I would describe INLA as being Bayesian inference for people who lacked the ideological purity to wait 14 hours for a poorly mixing BUGS chain to run, instead choosing to spend 14 seconds to get a better “approximate” answer. These days, Stan exists in earnest and that 14 hours is 20 minutes for small-ish models with only a couple of thousand observations, and the answer that comes out of Stan is probably as good as INLA. And there are plans afoot to make Stan actually solve these models with at least some sense of urgency.
Working on INLA I learnt a new fear: the fear that someone else was going to publish a simulation study comparing INLA with something else without checking with us first.
Now obviously, we wanted people to run their comparisons past us so we could ruthlessly quash any dissent and hopefully exile the poor soul who thought to critique our perfect method to the academic equivalent of a Siberian work camp.
Or, more likely, because comparing statistical models is really hard, and we could usually make the comparison much better by asking some questions about how it was being done.
Sometimes, learning from well-constructed simulation studies how INLA was failing led to improvements in the method.
But nothing could be learned if, for instance, the simulation study was reporting runs from code that wasn’t doing what the authors thought it was. And I don’t want to suggest that bad or unfair comparisons come from malice (for the most part, we’re all quite conscientious and fairly nice), but rather that they happen because comparing statistical algorithms is hard.
And comparing algorithms fairly where you don’t understand them equally well is almost impossible.
Why am I bringing this up? It’s because of the second statistical thing that I worked on while I was living in sunny Trondheim (in between looking at the fjord and holding onto the sides of buildings for dear life because for 8 months of the year Trondheim is a very pretty mess of icy hills).
During that time, I worked with Finn Lindgren and Håvard “INLA” Rue on computationally efficient approximations to Gaussian random fields (which is what we’re supposed to call Gaussian Processes when the parameter space is more complex than just “time” [*shakes fist at passing cloud*]). Finn (with Håvard and Johan Lindström) had proposed a new method, cannily named the Stochastic Partial Differential Equation (SPDE) method, for exploiting the continuous-space Markov property in higher dimensions. Which all sounds very maths-y, but it isn’t.
The guts of the method say: “all of our problems with working computationally with Gaussian random fields come from the fact that the set of all possible functions is too big for a computer to deal with, so we should do something about that”. The “something” is to replace the continuous function with a piecewise linear one defined over a fairly fine triangulation of the domain of interest.
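A one-dimensional caricature of that replacement (hat-function interpolation on an interval rather than a triangulated 2D domain, so this is purely illustrative and not the SPDE machinery itself): swapping the continuous function for its piecewise linear interpolant on a finite node set, and watching the worst-case error shrink as the node set is refined.

```python
import numpy as np

def piecewise_linear_approx(f, nodes, x):
    """Replace f by its piecewise linear interpolant through the values at `nodes`."""
    return np.interp(x, nodes, f(nodes))

f = np.sin
x = np.linspace(0.0, np.pi, 10_001)  # fine grid for measuring the error

# Refining the node set shrinks the worst-case error of the finite-dimensional stand-in.
for n in (5, 9, 17):
    nodes = np.linspace(0.0, np.pi, n)
    err = np.max(np.abs(f(x) - piecewise_linear_approx(f, nodes, x)))
    print(f"{n:3d} nodes: max error {err:.5f}")
```

The payoff in the real method is that the finite-dimensional object has a sparse precision matrix, which is what makes the computation tractable.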
But why am I talking about this? Sorry. One day I’ll write a short post.
A very exciting paper popped up on arXiv on Monday comparing a fairly exhaustive collection of recent methods for making spatial Gaussian random fields more computationally efficient.
Why am I not cringing in fear? Because if you look at the author list, they have included an author from each of the projects they have compared! This means that the comparison will probably be as good as it can be. In particular, it won’t suffer from the usual problem of the authors understanding some methods they’re comparing better than others.
So how did they go? Well, actually, they did quite well. I like that
But I’m an academic statistician. And our key feature, as a people, is that we loudly and publicly dislike each other’s work. Even the stuff we agree with. Why? Because people with our skills who also have impulse control tend to work for more money in the private sector.
So with that in mind, let’s have some fun.
(Although seriously, this is the best comparison of this type I’ve ever seen. So, really, I’m just wanting it to be even bester.)
So what’s wrong with it?
The most obvious problem with the comparison is that the problem that these methods are being compared on is not particularly large. You can see that from the timings. Almost none of these implementations are sweating, which is a sign that we are not anywhere near the sort of problem that would really allow us to differentiate between methods.
So how small is small? The problem had 105,569 observations and required prediction at at most 44,431 other locations. To be challenging, this data needed to be another order of magnitude bigger.
(Can you tell what I’m listening to?)
The second problem with the comparison is that the problem is tooooooo easy. As the data is modelled with Gaussian observation noise and a multivariate Gaussian latent random effect, it is a straightforward piece of algebra to eliminate all of the latent Gaussian variables from the model. This leads to a model with only a small number of parameters, which should make inference much easier.
How do you do that? Well, suppose the data is $y$, the Gaussian random field is $x$, and the hyperparameters are $\theta$. In this case, we can use conditional probability to write
$$\pi(\theta \mid y) \propto \frac{\pi(y \mid x, \theta)\,\pi(x \mid \theta)\,\pi(\theta)}{\pi(x \mid \theta, y)},$$
which holds for every value of $x$, and in particular $x = 0$. Hence if you have a closed-form full conditional $\pi(x \mid \theta, y)$ (which is the case when you have Gaussian observations), you can write the marginal posterior out exactly without having to do any integration.
A much more challenging problem would have had Poisson or binomial data, where the full conditional doesn’t have a known form. In this case you cannot do this marginalisation analytically, so you put much more stress on your inference algorithm.
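For Gaussian observations the trick is easy to check numerically. A toy sketch (the model and numbers are mine, purely illustrative): with a single observation y | x ~ N(x, σ²) and field value x ~ N(0, τ²), the identity π(y) = π(y | x) π(x) / π(x | y) holds for every x, so we can evaluate it at the convenient point x = 0 and compare with the direct marginal.

```python
import numpy as np
from scipy.stats import norm

sigma, tau, y = 0.7, 1.3, 2.0  # noise sd, field sd, a single observation

# Closed-form full conditional x | y ~ N(m, s^2) for this conjugate Gaussian model.
m = tau**2 / (tau**2 + sigma**2) * y
s = np.sqrt(tau**2 * sigma**2 / (tau**2 + sigma**2))

# pi(y) = pi(y | x) pi(x) / pi(x | y), valid at any x -- evaluate at x = 0.
marginal_via_identity = norm.pdf(y, 0.0, sigma) * norm.pdf(0.0, 0.0, tau) / norm.pdf(0.0, m, s)

# The direct marginal: y ~ N(0, tau^2 + sigma^2).
marginal_direct = norm.pdf(y, 0.0, np.sqrt(tau**2 + sigma**2))
print(marginal_via_identity, marginal_direct)  # identical up to floating point
```

With Poisson or binomial observations the full conditional has no closed form, so this shortcut disappears, which is exactly why such data would stress the inference algorithms harder.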
I guess there’s an argument to be made that some methods are really difficult to extend to non-Gaussian observations. But there’s also an argument to be made that I don’t care.
The prediction quality is measured in terms of mean squared error and mean absolute error (which are fine), the continuous ranked probability score (CRPS) and the interval score (INT), both of which are proper scoring rules. Proper scoring rules (follow the link or google for more if you’ve never heard of them) are the correct way to compare probabilistic predictions, regardless of the statistical framework that’s used to make the predictions. So this is an excellent start!
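Since CRPS comes up a lot, here is a quick sketch of it for a Gaussian predictive distribution, using the standard closed form from the proper-scoring-rules literature (the numbers and the check against the defining integral are my own):

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def crps_gaussian(mu, sigma, y):
    """Closed-form CRPS of a N(mu, sigma^2) predictive distribution at observation y."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

mu, sigma, y = 0.5, 1.2, 1.7
closed_form = crps_gaussian(mu, sigma, y)

# Check against the definition: the integral of (F(t) - 1{t >= y})^2 over the real line,
# split at y so the integrand is smooth on each piece.
left, _ = quad(lambda t: norm.cdf(t, mu, sigma) ** 2, -np.inf, y)
right, _ = quad(lambda t: (norm.cdf(t, mu, sigma) - 1) ** 2, y, np.inf)
print(closed_form, left + right)  # agree to quadrature accuracy
```

Being proper means the expected score is minimised by reporting your true predictive distribution, which is why these scores are the right currency for comparing methods.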
But one of these measures does stand out: the prediction interval coverage (CVG) which is defined in the paper as “the percent of intervals containing the true predicted value”. I’m going to parse that as “the percent of prediction intervals containing the true value”. The paper suggests (through use of bold in the tables) that the correct value for CVG is 0.95. That is, the paper suggests the true value should lie within the 95% interval 95% of the time.
This is not true.
Or, at least, this is considerably more complex than the result suggests.
Or, at least, this is only true if you compute intervals that are specifically built to do this, which is mostly very hard to do. And you definitely don’t do it by providing a standard error (which is an option in this competition).
So what’s wrong with CVG?
Well, first of all, it’s a multiple testing problem. You are not testing the same interval multiple times, you are checking multiple intervals one time each. So it can only be meaningful if the prediction intervals were constructed jointly to solve this specific multiple testing problem.
Secondly, it’s extremely difficult to know what is considered random here. Coverage statements are statements about repeated tests, so how you repeat them will affect whether or not a particular statement is true. It will also affect how you account for the multiple testing when building your prediction intervals. (Really, if anyone did opt to just return standard errors, nothing good is going to happen for them in this criterion!)
Thirdly, it’s already covered by the interval score. If your interval is $[l, u]$ with nominal level $100(1-\alpha)\%$, the interval score for an observation $y$ is
$$S_\alpha(l, u; y) = (u - l) + \frac{2}{\alpha}(l - y)\,\mathbf{1}\{y < l\} + \frac{2}{\alpha}(y - u)\,\mathbf{1}\{y > u\}.$$
This score (where smaller is better) rewards you for having a narrow prediction interval, but penalises you every time the data does not lie in the interval. The score is minimised when $l$ and $u$ are the true $\alpha/2$ and $1-\alpha/2$ quantiles of the predictive distribution. So this really is a good measure of how well the interval estimate is calibrated that also checks more aspects of the interval than CVG (which lacks the width term) does.
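To make the penalty structure concrete, here is a minimal sketch of the interval score in code (the standard width-plus-penalty form; the simulation and the interval choices are mine, not the paper’s), checking that on average the true central 95% interval of a standard normal beats both a wider and a narrower competitor:

```python
import numpy as np

def interval_score(l, u, y, alpha=0.05):
    """Interval score: width plus 2/alpha times the amount by which y escapes [l, u]."""
    return (u - l) + (2 / alpha) * np.maximum(l - y, 0.0) + (2 / alpha) * np.maximum(y - u, 0.0)

rng = np.random.default_rng(1)
y = rng.normal(size=200_000)  # observations from a standard normal

# The true central 95% interval should beat both competitors on average.
true_iv = interval_score(-1.96, 1.96, y).mean()
too_wide = interval_score(-3.0, 3.0, y).mean()
too_tight = interval_score(-1.0, 1.0, y).mean()
print(f"true: {true_iv:.3f}, too wide: {too_wide:.3f}, too tight: {too_tight:.3f}")
```

Note how the score punishes the narrow interval through the exceedance penalty and the wide one through its width, which is exactly the balance a raw coverage percentage can’t see.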
Any conversation about how to evaluate the quality of an interval estimate really only makes sense in the situation where everyone has constructed their intervals the same way. Now the authors have chosen not to provide their code, so it’s difficult to tell what people actually did. But there are essentially four options:
But how well these different options work will depend on how they’re being assessed (or what they’re being used for).
Option 1: We want to fill in our sparse observation by predicting at more and more points
(This is known as “in-fill asymptotics”). This type of question occurs when, for instance, we want to fill in the holes in satellite data (which are usually due to clouds).
This is the case that most closely resembles the design of the simulation study in this paper. In this case you refine your estimated coverage by computing more prediction intervals and checking if the true value lies within the interval.
Most of the easy-to-find results about coverage in these settings come from the 1D literature (specifically around smoothing splines and non-parametric regression). In these cases, it’s known that the first option is bad, the second option will lead to conservative regions (the coverage will be too high), the third option involves some sophisticated understanding of how Gaussian random fields work, and the fourth is not something I know anything about.
Option 2: We want to predict at one point, where the field will be monitored multiple times
This second option comes up when we’re looking at a long-term monitoring network. This type of data is common in environmental science, where a long-term network of sensors is set up to monitor, for example, air pollution. The new observations are not independent of the previous ones (there’s usually some sort of temporal structure), but independence can often be assumed if the observations are distant enough in time.
In this case, Option 1 will be the right way to construct your interval, option 2 will probably still be a bit broad but might be ok, and options 3 and 4 will probably be too narrow if the underlying process is smooth.
Option 3: Mixed asymptotics! You do both at once
Simulation studies are the last refuge of the damned.
So what are my suggestions for making this comparison better (other than making it bigger, harder, and dumping the weird CVG criterion)?
What do I mean by that? Well, in the simulation study, the paper only considered one possible set of data simulated from the correct model. All of the results in their Table 2, which contains the scores and timings on the simulated data, depend on this particular realisation. And hence Table 2 is a realisation of a random variable that will have a mean and standard deviation.
This should not be taken as an endorsement of the frequentist view that the observed data is random and estimators should be evaluated by their average performance over different realisations of the data. This is an acknowledgement of the fact that in this case the data is actually a realisation of a random variable. Reporting the variation in Table 2 would give an idea of the variation in the performance of the methods, and would lead to a more nuanced and realistic comparison. It is not difficult to imagine that for some of these criteria there is no clear winner when averaged over data sets.
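A tiny illustration of why that matters, using a hypothetical pair of estimators rather than the spatial methods in the paper: across repeated simulated data sets, the per-data-set ranking of two estimators is itself random, so a single realisation can crown either one.

```python
import numpy as np

rng = np.random.default_rng(2)
theta, n_obs, n_datasets = 1.0, 20, 2_000

# Estimator A: the sample mean. Estimator B: the sample mean shrunk halfway towards zero.
data = rng.normal(theta, 1.0, size=(n_datasets, n_obs))
est_a = data.mean(axis=1)
est_b = 0.5 * est_a

# Which estimator "wins" (smaller squared error) depends on the realised data set.
a_wins = (est_a - theta) ** 2 < (est_b - theta) ** 2
print(f"A wins on {a_wins.mean():.1%} of data sets; B wins on the rest")
```

A single-realisation table would report whichever ordering that one draw happened to produce, with no hint of how often it flips.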
I have very mixed feelings about the timings column in the results table. On one hand, an “order of magnitude” estimate of how long this will actually take to fit is probably a useful thing for a person considering using a method. On the other hand, there is just no way for these results not to be misleading. And the paper acknowledges this.
Similarly, the competition does not specify things like priors for the Bayesian solutions. This makes it difficult to really compare things like interval estimates, which can strongly depend on the specified priors. You could certainly improve your chances of winning on the CVG computation for the simulation study by choosing your priors carefully!
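As a toy illustration of that prior sensitivity (a conjugate normal model of my own choosing, nothing from the competition): the 95% posterior interval for a mean narrows dramatically as the prior standard deviation κ is made small, so two teams fitting the "same" Bayesian model with different priors report very different intervals.

```python
import numpy as np

# Conjugate model: y_i ~ N(mu, 1), prior mu ~ N(0, kappa^2).
rng = np.random.default_rng(3)
y = rng.normal(1.0, 1.0, size=10)

for kappa in (0.1, 1.0, 10.0):
    post_prec = len(y) + 1.0 / kappa**2       # posterior precision of mu
    post_mean = y.sum() / post_prec
    post_sd = np.sqrt(1.0 / post_prec)
    width = 2 * 1.96 * post_sd                # width of the central 95% posterior interval
    print(f"prior sd {kappa:5.1f}: interval width {width:.3f}, centred at {post_mean:.3f}")
```

A tight prior also drags the interval's centre towards zero, so both the coverage and the scores of the resulting intervals are partly a property of the prior, not just the algorithm.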
I haven’t really talked about the real data performance yet. Part of this is because I don’t think real data is particularly useful for evaluating algorithms. More likely, you’re evaluating your chosen data set as much as, or even more than, you are evaluating your algorithm.
Why? Because real data doesn’t follow the model, so even if a particular method gives a terrible approximation to the inference you’d get from the “correct” model, it might do very very well on the particular data set. I’m not sure how you can draw any sort of meaningful conclusion from this type of situation.
I mean, I should be happy I guess because the method I work on “won” three of the scores, and did fairly well in the other two. But there’s no way to say that wasn’t just luck.
What does luck look like in this context? It could be that the SPDE approximation is a better model for the data than the “correct” Gaussian random field model. It could just be Finn appealing to the old Norse gods. It’s really hard to tell.
If any real data is to be used to make general claims about how well algorithms work, I think it’s necessary to use a lot of different data sets rather than just one.
Similarly, a range of different simulation study scenarios would give a broader picture of when different approximations behave better.
One more kiss before we part: This field is still alive and kicking. One of the really exciting new ideas in the field (probably too new to be in the comparison) is that you can speed up the computation of the unnormalised log-posterior through hierarchical decompositions of the covariance matrix (there is also code). It’s a really neat way of attacking the problem.
There are a bunch of other things that are probably worth looking at in this article, but I’ve run out of energy for the moment. Probably the most interesting thing for me is that a lot of the methods that did well (SPDEs, Predictive Processes, Fixed Rank Kriging, Multi-resolution Approximation, Lattice Krig, Nearest-Neighbour Predictive Processes) are cut from very similar cloth. It would be interesting to look deeper at the similarities and differences in an attempt to explain these results.
The post Barry Gibb came fourth in a Barry Gibb look alike contest appeared first on Statistical Modeling, Causal Inference, and Social Science.
My name is Alessandra (called Sandy) from Italy.
At first, sorry for my English!
I’d like to tell you that you are my favorite illustrator !
I met you in Lucca comics & games in far 2005 during an interview of Ragno Magazine, do you remember?
In that time, you draw me a play card of munchkin “a lot of very nice balloons”, but my boyfriend lost my card and I cry.
I love Munchkin illustration!
In Lucca comics & games 2014 I went to Lucca only for you, but during your signed session, Lucca’s security couldn’t enter in Games palace .
So, I’d like to know if you will came in Italy again , and finally say hallo to you!
Thanks a lot for your kindness and enjoy yourself!
First off, thank you so much for the VERY kind words! Your English is MUCH better than my Italian, so you have nothing to apologize for!
I’d love to come back to Italy, and soon. When I was in school, in England, we’d spend our summers outside of Milan. I miss it terribly.
The problem with Lucca is, it usually falls on Halloween, and I really try to spend holidays with my wife and daughter. But I do have many friends there, and I miss them all. It’s also one of my all-time favorite conventions. So… possibly…?
I’m sorry I missed you at Lucca 2014 – it was a crazy huge convention. I’m going over my 2018 schedule now: if I’m not back at Lucca next year, perhaps there will be another Italian show.
In any case, Italy’s definitely top of my list to get back to, and soon! And I’ll certainly re-draw you that card. Tell your boyfriend he loses a level!
With many thanks,
Raj Patel & Jason Moore. A History of the World in Seven Cheap Things: A Guide to Capitalism, Nature, and the Future of the Planet. University of California Press, 2017.
I was pleased to do a blurb for this one:
This is a highly original, brilliantly conceptualized analysis of the effects of capitalism on seven key aspects of the modern world. Written with verve and drawing on a range of disciplines, A History of the World in Seven Cheap Things is full of novel insights.
What are the seven things so cheap that they are not valued appropriately?
Read the book to connect the dots. As Patel and Moore conclude, if what they say “sounds revolutionary, so much the better.”
Someone pointed me to a blog post, Negative Psychology, from 2014 by Jim Coan about the replication crisis in psychology.
My reaction: I find it hard to make sense of what he is saying because he doesn’t offer any examples of the “negative psychology” phenomenon that he is discussing. I kinda get annoyed when people set themselves up as the voice of reason but don’t ever get around to explaining what’s the unreasonable thing they dislike.
I read more by Coan and he seems to me to be making a common mistake, which is to conflate scientific error with character flaw. He thinks that critics of bad research are personally criticizing scientists. And, conversely, since he knows that scientists are mostly good people, he resists criticism of their work. Well, hey, probably 500 years ago most astrologers were good people too, but this doesn’t mean their work was any good to anyone! It’s not just about character, it’s also about data and models and methods. One reason I prefer to use the neutral term “forking paths” rather than the value-laden term “p-hacking” is that I want to emphasize that scientists can do bad work, even if they’re trying their best to do good work. I have no reason to think that John Bargh, Roy Baumeister, Ellen Langer, etc. want to be pushing around noise and making unreplicable claims. I’m sure they’d love to do good empirical science and they think they are. But, y’know, GIGO.
Good character is not enough. All the personal integrity in the world won’t help you if your measurements are super-noisy and if you’re using statistical methods that don’t work well in the presence of noise.
And, of course, once NPR, Gladwell, and Ted talks get involved, all the incentives go in the wrong direction. Researchers such as Coan have every motivation to exaggerate and very little motivation to admit error or even uncertainty.
My correspondent responds:
This is also a problem in medicine as I am sure you already know. This effect should be named: so much noise it makes you deaf to constructive criticism :) Unfortunately, this affects many people’s lives and I think it should be brought to light. Besides, constructive criticism is one of the pillars of science.
As Karl Pearson wrote in 1900:
In an age like our own, which is essentially an age of scientific inquiry, the prevalence of doubt and criticism ought not to be regarded with despair or as a sign of decadence. It is one of the safeguards of progress; la critique est la vie de la science, I must again repeat. One of the most fatal (and not so impossible) futures for science would be the institution of a scientific hierarchy which would brand as heretical all doubt as to its conclusions, all criticism of its results.
P.S. This post happens to be appearing shortly after a discussion on replicability and scientific criticism. Just a coincidence. I wrote the post several months ago (see here for the full list).
The post “La critique est la vie de la science”: I kinda get annoyed when people set themselves up as the voice of reason but don’t ever get around to explaining what’s the unreasonable thing they dislike. appeared first on Statistical Modeling, Causal Inference, and Social Science.
It’s very easy for some groups of humans to slip into a lazy way of thinking about our planet. They look around and think it was made for us, in some cases literally so. Air, water, land, resources to exploit… the Earth is ours for the taking.
Not everyone feels this way, of course, but enough do — and have enough power — to influence a great many other people.
Others know better. As a group, one of the more convincing viewpoints counter to this comes from scientists. When we look at the Earth carefully, understand it through the filter of trying to learn from what it’s showing us rather than simply taking from it what we want, we find out something very, very important: The Earth is under no obligation whatsoever to nurture us.
Quite the opposite, in fact. If you look at the planet another way, it seems like it’s constantly trying to kill us. An animation put out by the Pacific Tsunami Warning Center makes that very, very obvious: It shows every recorded earthquake from Jan. 1, 2000 to Dec. 31, 2015.
Yeah. The rate of the video is 30 days of earthquakes displayed per second. Each flash is an earthquake, with the magnitude of the quake displayed as a scaled circle (after a moment each quake fades and shrinks in size so it doesn’t obscure subsequent activity).
Watching the video, it almost seems like the Earth is alive. Of course, that’s another illusion, an anthropomorphic filter our brains like to employ.
But it isn’t alive, and neither was it created for us, nor is it trying to kill us. It just exists as the laws of nature define. In fact, it is we who have, over millions of generations of life, adapted to it. And by no means has that been an easy task; the multiple mass extinctions life has undergone over the past several billion years are testament to that.
But this animation shows one thing very clearly: We take the Earth for granted at our peril. Small earthquakes can do heavy damage if we are not prepared, and large ones can spread that devastation over huge distances.
And we tamper with our planet at our own risk, as well. Run the video again (at 2X speed if that helps) and keep your eyes on Oklahoma, in the United States. You’ll see virtually no earthquakes there until 2008 or so. Then, suddenly, they bloom, dozens of them. Why? Because of wastewater from oil extraction injected into wells.
I won’t make any Frankensteinian parallels here, but it’s worth noting that when we tamper with the Earth, it sometimes tampers back. The environment is in a dynamic equilibrium, ever-changing but balanced. That balance can be upset though, even by such creatures small as we. Off the top of my head, the fact that we dump 40 billion extra tons of carbon dioxide into the air every year means the Earth will respond in some way. Many ways, in fact, none of them good.
Perhaps Isaac Newton wasn’t thinking of this when he crafted his Third Law of Motion, but as we have seen over and again, our actions sometimes produce equal and opposite reactions. Sometimes unequal, with the effects far outstripping the causes, like climate change. But that does seem to be a lesson here; we do something because it seems helpful or useful, then find out what we’re doing is making things worse for ourselves.
Science has no moral for us; it is a tool, like a shovel or a hammer. Any tool can be used for good or for ill, and it’s up to us to decide which. But the beauty of science is that it can be used to help us make that decision a wise one.
Ignoring it, well, that would be foolish. But many fools love power, don’t they?
Of course, that power is in many cases given to them by us. That’s a decision we need to make more wisely as well.
Tip o’ the strike-slip fault to Kris McCall.
Six journalists — three in jail and three on bail — are facing lengthy jail terms in an indictment focusing on leaked emails from Berat Albayrak, Turkey’s energy minister and president Erdogan’s son-in-law. The first hearing in their case will be held on 24 October at Istanbul Çaglayan Courthouse.
Dawn raids were conducted on 25 December 2016 following an investigation into Albayrak’s leaked emails. Tunca Öğreten, a former editor of Diken, an opposition news portal in Turkey, Ömer Çelik, the news editor of the pro-Kurdish Dicle News Agency and Mahir Kanaat, an employee of BirGun, a left-wing opposition newspaper, were sent to prison without charges after 24 days in custody while Derya Okatan, Eray Sargın and Metin Yoksu were released on bail.
RedHack, a group of Marxist hackers, admitted responsibility for the cyber attack in September 2017 and added a number of Turkish journalists to a private Twitter direct messaging group without anyone’s consent. Once the minister’s emails were made public, journalists then reported about the leak, filtering the information based on the public’s right to know.
“State secrets” on a personal email account
Based on the contents of the emails, Tunca Ogreten reported on Albayrak’s alleged executive role in an oil transportation company called PowerTrans (which still operates in the Kurdish region of Iraq).
Long before the leaks, a suspected link between Albayrak and PowerTrans had already made the news after the Turkish government granted a special status to the company – an allegation officials strongly denied.
After three journalists – Celik, Kanaat and Ogreten – spent seven months in pretrial detention without knowing what they had been charged with, the prosecution finally filed an indictment in July, claiming that the information in Albayrak’s personal (Gmail, Hotmail and Yahoo) email accounts could be considered “state secrets depending on circumstances”.
The prosecution also accused all the journalists of manipulating the contents of the emails, without explaining how, and alleged that they tried “creating a negative perception for the failure of [Turkey’s] national energy policy”.
Ogreten contested the claims about his alleged links with DHKP-C, an extreme leftist armed group listed as a terror organisation in Turkey, but the prosecutor dismissed his objections and insisted on guilt by association, arguing that RedHack was connected to DHKP-C and that, therefore, so was he.
Adding to the obscurity of the charges, Ogreten is also accused of committing crimes on behalf of FETÖ/PDY, the pro-Islamic network led by US-based cleric Fethullah Gulen that Ankara recently designated a terror organisation.
The only evidence the prosecutor sets forth for this allegation is Ogreten’s previous work at Taraf, a pro-Gulen newspaper for which many of today’s popular pro-government columnists have also written.
Taraf newspaper was among dozens of media outlets that the Turkish government shut down in statutory decrees, based on their alleged links with terror groups, including the Gulen organization, or FETO, that Ankara claims masterminded last year’s coup attempt.
Daily BirGün’s employee accused of being a member of FETO
The indictment includes no reference to BirGün’s coverage of the RedHack leaks but notes that Mahir Kanaat, one of its employees, followed RedHack’s accounts on Twitter.
In an apparent ideological contradiction, Kanaat is also accused of being a member of the pro-Islamic FETO movement, based on two Word documents found on his mobile.
Both documents are copies of official police investigation records from the 2013 graft probe that entangled several cabinet ministers and close relatives of President Erdogan. The government accuses FETO-linked police of having triggered the probe, and prosecutors often present probe-related documents found on suspects’ devices as proof of organisational links.
In Kanaat’s case, they pointed to the date on both documents, saying it predated the probes becoming public, leading to the accusation that the journalist had early access to FETO-linked police documents through his organisational connections.
What this overlooks, however, is that a Word document carries an embedded creation date that is preserved in the file itself and reappears no matter when the file is downloaded. The “early access” charge is therefore baseless, used only to frame the journalist.
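The metadata point can be illustrated with a short Python sketch, using only the standard library (the filenames and the timestamp below are hypothetical). A .docx file is a ZIP archive, and its docProps/core.xml part stores a dcterms:created timestamp inside the file itself, so the creation date travels with the file when it is copied or downloaded later:

```python
import os
import shutil
import tempfile
import zipfile
import xml.etree.ElementTree as ET

# Minimal core.xml carrying only the creation date (a real .docx has
# more parts, but docProps/core.xml is where this metadata lives).
CORE_XML = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cp:coreProperties
    xmlns:cp="http://schemas.openxmlformats.org/package/2006/metadata/core-properties"
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <dcterms:created xsi:type="dcterms:W3CDTF">2013-11-01T09:00:00Z</dcterms:created>
</cp:coreProperties>"""

def make_minimal_docx(path):
    # A .docx is just a ZIP archive of XML parts.
    with zipfile.ZipFile(path, "w") as z:
        z.writestr("docProps/core.xml", CORE_XML)

def read_created(path):
    # Extract the embedded creation timestamp from the archive.
    with zipfile.ZipFile(path) as z:
        root = ET.fromstring(z.read("docProps/core.xml"))
    ns = {"dcterms": "http://purl.org/dc/terms/"}
    return root.find("dcterms:created", ns).text

tmp = tempfile.mkdtemp()
original = os.path.join(tmp, "probe_records.docx")
downloaded = os.path.join(tmp, "downloaded_copy.docx")

make_minimal_docx(original)
shutil.copy(original, downloaded)  # simulates downloading the file years later

# The embedded creation date survives the copy unchanged.
assert read_created(downloaded) == read_created(original)
```

The date shown by Word for such a copy reflects the embedded metadata, not when the file arrived on the device, which is why the file’s timestamp alone cannot establish early access.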
Furthermore, the prosecutor also turned a blind eye on BirGun’s highly consistent and critical coverage of Fethullah Gulen, the leader of FETO.
A mishmash of accusations
Omer Celik, the Diyarbakir bureau chief and editor of the pro-Kurdish Dicle News Agency, is another journalist accused of spreading “propaganda of a terrorist organization” through his tweets.
His work relationship with DIHA, one of the outlets that were shut down by the government in a statutory decree for their alleged terror links, is the only evidence presented in the indictment.
Three other journalists – Okatan, Sargin and Yoksu, who were released on bail – are also accused of spreading “propaganda for a terrorist organisation”.
Okatan and Sargin, two news editors, are accused by guilt of association, as the tweets in question were sent from company accounts, not private ones. The majority of the tweets quoted in relation to the charges against Yoksu are news updates.
The indictment that centres around the RedHack leaks of Berat Albayrak’s emails includes journalists who did not even report about the leak.
In a cocktail of accusations, all the journalists are presented as having alleged links with various terror organisations, spanning a wide ideological range from the pro-Islamist FETO to the Marxist-Leninist MLKP.
The indictment also accuses all six journalists of “intercepting and disrupting information systems, and destroying or altering data”, without any explanation of what data they intercepted or how they altered it.
A separate case for Deniz Yücel
Although Deniz Yucel, the Turkey correspondent of Germany’s Die Welt, was issued an arrest warrant as part of the investigation into the RedHack leak, he was asked no questions about RedHack.
Yucel has been kept in solitary confinement for nearly a year, without any official charges. His reports about the Kurdish conflict were presented as the reason for his arrest in February.
The first hearing is on 24 October
Despite all the apparent contradictions in the indictment, Mahir Kanaat, Omer Celik and Tunca Ogreten had been in jail for 296 days as of 16 October 2017.
The six journalists, including those released on bail, will stand before a court for the first time after almost a year.
There are more than 170 journalists in Turkish jails now. No matter how many cases that makes, we need your uninterrupted support in defending them all. These hearings, frequent as they are, should never be treated as commonplace.
The post Turkey: Solidarity with journalists falsely accused of leaking government emails appeared first on Index on Censorship.
Battle of Ideas 2017
Can satire survive the era of fake news?
Women vs feminism: Do we all need liberating from the gender wars?
Censorship and identity: Free speech for you but not for me?
Political activism and protest today
With a paradoxically destructive optimism, satirists, from the age of the Roman poet Juvenal and since, have been driven by an almost childlike conviction that the world can and should do better. And the satirists of today, apostates as they are from the modern religion of political correctness – an orthodoxy that (despite professing to be both) is neither moral nor intellectual – need set their sights no further than their own milieu for the necessary targets.
A little over a year ago, at the close of the 2016 Edinburgh Fringe Festival, I presented the Defining the Norm Awards, an Oscars-styled lampooning of stand-up comedy banality and the predatory entertainment industry which fuels it. My intent was to unveil a satirical blueprint of how the mundane is cynically transferred from open mic to telly screen. And of all the sacred cows I have sought to slaughter in my twenty-year career as a satirist – from modern psychiatry to Islam – the current state of Western comedy was by far the most fanatically defended, if only by its practitioners.
What resulted was a tidal wave of social media whinging, suspicions cast upon my mental well-being, and a blacklisting that continues to this day from live bookers all the way up to the BBC comedy department. (“We can’t use Will Franken,” is the word from staff insiders on those rare occasions when my name is put forward for a project. “Remember, he’s the guy that did those awards.”) One thing, however, that was not in evidence in the wake of my mockery was anything resembling a satirical counter-response from the comedy collective. A point, I felt, had been painfully proven.
Because the disquieting truth in our present age is that those least qualified to understand, let alone appreciate, satire are too often comedians themselves. And to attack those who make false pretence to satire is to simultaneously attack a multitude of unquestioned shibboleths – be it lazy reliance on identity politics, Donald Trump’s presumed unfitness to be president, or even the sanctimonious mourning over Britain’s exit from the European Union.
Yet leaving aside, for example, the sheer repetitiveness and predictability of Nigel Farage and Donald Trump putdowns, what makes such political targets ultimately ineffective as contemporary satirical fodder is simply this: Farage and Trump are funnier than most comedians. Both figures, after all, managed to accomplish, in quick succession, major acts of geopolitical subversion against the status quo. Once in the not-too-distant past, this would have been the objective of comedy.
Though such an observation remains anathema to the current entertainment establishment, such is the short-sightedness of effective satirists that they rarely think ahead in terms of people-pleasing career advancement. Rather, they are compelled by an attribute especially repulsive to today’s crop of entertainers: morality.
For amidst all the speculation amongst comedians as to why I decided to hold those in my field up to ridicule, the simple – and therefore baffling – truth was that I ridiculed them because I believed they needed to be ridiculed.
The post Will Franken: Nigel Farage and Donald Trump are funnier than most comedians appeared first on Index on Censorship.
Advertisement recently spotted by Guy Freeman in the Central, Hong Kong MTR (subway) station:
It's a mixture of Chinese and English, of simplified and traditional characters. In this post, I will focus on the calligraphically written slogan on the right side of the poster:
Hǎinèi cún 'zhī'jǐ, let's zhīfùbǎo
This slogan is not easy to translate. Consequently, before attempting to do so, I will explain some of the more elusive aspects of these two clauses / lines.
First of all, the zhī 支 inside single Chinese quotation marks in the first clause has more than two dozen different meanings, including "support, sustain, raise, bear, put up, prop up, draw money, pay, pay money, disburse, check / cheque, defray, protrude, put off, put somebody off, send away, branch, stick, offshoot, twelve earthly branches, a surname, division, subdivision, auxiliary verb, measure word for troops". For the moment, I'll refrain from attempting to translate it in the present context.
In the second clause, zhī 支 is part of the disyllabic word zhīfù 支付 ("pay [money]; defray"), which, in turn, is part of the trademark Zhīfùbǎo 支付宝 ("Alipay", China's clone of PayPal). Being the name of a company, Zhīfùbǎo 支付宝 ("Alipay") is a noun. However, since it here follows "let's" to form a first person plural command, it is acting as a verb: "let's Zhīfùbǎo 支付宝" ("let's Alipay").
When we realize that the first clause is a literary allusion, it gets even trickier. The first clause is perfectly homophonous with and echoes the first line of this couplet by the Tang poet, Wang Bo 王勃 (650-676):
hǎinèi cún zhījǐ, tiānyá ruò bǐlín
"When you have a close friend in the world, the far ends of heaven are like next door."
Thus 'zhī'jǐ「支」己 (lit., "pay self") is a pun for zhījǐ 知己 ("bosom / close / intimate friend; confidant[e]; soulmate", lit., "know-self").
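The perfect homophony driving the pun can even be checked mechanically. A minimal Python sketch follows; the small pinyin table is hand-filled for illustration only (a real application would use a romanization library such as pypinyin):

```python
# Toy pinyin table covering only the characters in the slogan
# (an assumption for illustration, not a general romanizer).
PINYIN = {"海": "hǎi", "内": "nèi", "存": "cún",
          "支": "zhī", "知": "zhī", "己": "jǐ"}

def reading(text):
    """Return the Mandarin reading of `text`, syllable by syllable."""
    return " ".join(PINYIN[ch] for ch in text)

# The ad's line (with 支 "pay") and Wang Bo's line (with 知 "know")
# are perfect homophones -- the basis of the "pay pal" pun.
assert reading("海内存支己") == reading("海内存知己") == "hǎi nèi cún zhī jǐ"
```

Since 支 and 知 are both read zhī, the substitution is invisible to the ear and only visible on the page, which is exactly what makes the calligraphic slogan work.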
I would translate the whole couplet this way:
"You have a bosom friend (pay pal) everywhere, let's Alipay"
Guy notes that the ad "is from Alipay, a subsidiary of Alibaba, a very large Internet company from China. This shows the occasional outbursts from Chinese officials about defeating English to be useless at best."
Last question: why did they use the English word "let's" instead of the Mandarin equivalent, "ràng wǒmen 让我们" or "ràng wǒmen yīqǐ 让我们一起"? Because those are three or five syllables instead of one, which would sound clumsy and clunky rather than neat and crisp the way an ad should be.
If they wanted to avoid the English "let's" and use only Chinese, they could have written something like this:
yīqǐ Zhīfùbǎo 一起支付宝 ("together Alipay")
To tell the truth, in terms of rhythm, idiomaticity, and catchiness, that actually sounds better than "let's Zhīfùbǎo 支付宝 ('let's Alipay')" when paired with "Hǎinèi cún 'zhī'jǐ 海内存「支」己" ("You have a bosom friend [pay pal] everywhere").
Bottom line: they wanted to sound international, since Alipay has global aspirations.
There have been many earlier posts on multiscriptalism and multilingualism involving numerous languages and scripts. Here are some that specifically feature Chinese:
This is not an exhaustive list.
[Thanks to Fangyi Cheng, Yixue Yang, and Jinyi Cai]
Yes, you read it right, that’s “gound.” Justin E. H. Smith’s unsettling… essay? … for The Public Domain Review will explain it. Eventually. It begins (after a brief bit of throat-clearing):
Benno Guerrier von Klopp (1816–1903) was a Baltic German philologist, of French Huguenot origin, who studied at the University of Saint Petersburg and made most of his career as an academician ordinarius, while also spending a good portion of his later career at Jena. Klopp is remembered principally for his contributions to the study of Baltic and Slavic linguistics, not least his 1836 dissertation on the disappearance of the neuter gender in Middle Latvian, and his groundbreaking 1868 study of the morphosyntax of the Old Church Slavonic verbal prefix, vz-.
Significantly less well known is Klopp’s work on the development of the mature philosophical system of Immanuel Kant, a fellow Baltic German who may have been more familiar with the languages and customs of that region than other scholars have detected. In fact, if Klopp is correct, Kant’s first-hand ethnolinguistic researches extend well beyond the Baltic. While Klopp’s 1873 book, Die geheime Sumatrareise Immanuel Kants, is not found in the Library of Congress, or even in the supposedly comprehensive online WorldCat, I have been able to locate a copy of it in at least one place: the library of the faculty of Baltistik at the University of Greifswald in Mecklenburg-Vorpommern.
Don’t miss the footnotes, which include tidbits like “Yakov Brius (also known as Jacob Bruce, 1669-1735), was a Russian statesman and scientist. Like Kant, he was of Scottish ancestry. He conducted astronomical observations from the Sukharev Tower in Moscow. It was rumoured among Muscovites that Brius practiced black magic in the tower.” And hang on to your hat!
In the comments to "Easy versus exact" (10/14/17), a discussion of the term "Hànzi 汉子" emerged as a subtheme. Since it quickly grew too large and complex to fit comfortably within the framework of the o.p., I decided to write this new post focusing on "Hàn 汉 / 漢" and some of the many collocations into which it enters.
To situate Language Log readers with some basic terms they likely already know, we may begin with Hànyǔ 汉语 ("Sinitic", lit., "Han language"), Hànyǔ Pīnyīn 汉语拼音 ("Sinitic spelling"), and Hànzì 汉字 ("Sinograph, Sinogram", i.e., "Chinese character"). All of these terms incorporate, as their initial element, the morpheme "Hàn 汉 / 漢". Where does it come from, and what does it mean?
"Hàn 汉 / 漢" is the name of a river that has its source in the mountains of the southwest part of the province of Shaanxi. It is the longest tributary of the Yangtze River, which it joins at the great city of Wuhan. The fact that Han is a river name is reflected in the water semantophore on the left side of the character that is used to write it.
The name of the river was adopted by Liu Bang (256-195 BC), the founding emperor, as the designation for his dynasty (206 BC-220 AD) — more specifically, the dynasty was named after Liu Bang's fiefdom Hànzhōng 汉中 / 漢中 (lit. "middle of the Han River"). After the Qin (221-206 BC), from which the name "China" most likely derives, the Han was the second imperial dynasty in Chinese history. Because the fame of the Han Dynasty resounded far and near, it came to be applied to the main ethnic group of China, as well as the language they spoke and the characters used to write it. Note that there could have been no Han ethnicity or nation before the Han Dynasty.
After the Han Dynasty fell, many of the dynasties that ruled in the northern part of the former empire during the following centuries were non-Sinitic peoples (proto-Mongols, proto-Turks, etc.) who actually looked down upon their Han subjects. During that period, in their mouths, "Hàn 汉 / 漢" became a derogatory term, especially in collocations such as Hàn'er 汉儿 and Hànzi 汉子, which we might think of as meaning something like "Han boy / fellow / guy". Such terms derived from "Hànrén 汉人 (漢人)" ("Han people"), which generally became a respectable designation again after the collapse of the northern dynasties. It is remarkable, however, that during the Yuan Dynasty (1271-1368), when the Mongols ruled over China, non-Sinitic peoples such as the Khitans, Koreans, and Jurchens were referred to as "Hànrén 汉人 (漢人)" ("Han people").
Here are some terms in Mandarin that are based on the Han ethnonym but refer to different types of people in various ways:
hànzi 汉子 39,300,000 ghits
1. man; fellow
3. Historically, as mentioned above, during the Northern Dynasties (386-577), hànzi 汉子 was a derogatory reference for Sinitic persons used by non-Sinitic peoples (who were rulers in the north at that time).
nánzǐhàn 男子汉 ("a real man") 11,600,000 ghits
nǚ hànzi 女汉子 ("tough girl") 7,180,000 ghits
dà nánzǐhàn 大男子汉 ("a big guy; macho man") 53,100 ghits
Comments by native speaker informants:
I know all these terms and I agree with all your translations. However, I also think that nǚ hànzi 女汉子 could mean "tomboy" (girls who can do things that men can do). I once saw a translation of nǚ hànzi 女汉子 as wo-man. I think that’s interesting too.
I think the term nǚ hànzi 女汉子 emerged only in the last few years in the Chinese-speaking world. So it is a bit difficult for someone like me who has been living outside for the last forty years to accurately tell its exact meaning. If it applies to young women only, then "tomboy" may not be too far off.
"What does the Chinese word '女漢子' mean?" (Quora)
"Renewal of the race / nation" (6/24/17)
Joshua A. Fogel, "New Thoughts on an Old Controversy: Shina as a Toponym for China", Sino-Platonic Papers, 229 (August, 2012), 1-25 (free pdf)
Victor H. Mair, "The Classification of Sinitic Languages: What Is 'Chinese'?", in Breaking Down the Barriers: interdisciplinary studies in Chinese linguistics and beyond (Festschrift for Alain Peyraube), pp. 735-754 (free pdf), esp. pp. 739-741.
[Thanks to Yixue Yang, Jinyi Cai, Sanping Chen, and Jing Wen]