Monday, January 30, 2017

Caching for anonymous (non-authenticated) users in Django

Hi boys and girls,

Recently I was optimizing performance on some of my Django sites and needed to cache all views for anonymous users while still rendering them normally for authenticated users. The Django documentation is silent on this. I also checked Stack Overflow, but people there recommend template-level caching, which I don't like. So below I will give you my version of how to solve it.

Saturday, January 21, 2017

Django Haystack - how to limit the number of search results

Hello friends,

Recently I needed to limit the number of search results in Haystack, and that turned out to be a bit of a challenge. So I decided to share the solution with you here.

I knew how to limit search results using the old-style views (haystack.views.SearchView):
from haystack.views import SearchView

class MySearchView(SearchView):

    # limit results to the first 100 matches
    def get_results(self):
        return self.form.search()[:100]

Friday, January 20, 2017

Fabric: how to set an environment variable to fix encoding

Hi there,

Today I encountered a weird problem and decided to share it with you.

Here is what happens. When I ssh to the server manually and run a Python script, everything works fine. But if I run the same script through a Fabric script that connects to the same server, it fails. In particular, it was an encoding error:

UnicodeEncodeError: 'ascii' codec can't encode character u'\xaa' in position bb: ordinal not in range(128)
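The error is easy to reproduce locally: Python falls back to the ASCII codec when the environment gives it no usable locale, which is what happens in Fabric's non-interactive SSH session. The sketch below simulates that with a subprocess, using `PYTHONIOENCODING` as a stand-in for the missing locale; over Fabric the equivalent fix would be exporting the variable on the remote side (for example `LC_ALL="en_US.UTF-8"` via Fabric 1.x's `shell_env` context manager). The variable names and values here are my choices for the demo.

```python
import os
import subprocess
import sys

# A child script that prints a non-ASCII character (U+00AA, as in the traceback).
child = [sys.executable, "-c", "print(u'\\xaa')"]

# Forcing ASCII I/O stands in for the empty locale of a non-interactive SSH
# session: the print fails with UnicodeEncodeError.
broken_env = dict(os.environ, PYTHONIOENCODING="ascii")
broken = subprocess.run(child, env=broken_env, capture_output=True)

# Exporting a UTF-8 encoding fixes it. Over Fabric you would set the variable
# on the remote shell instead, e.g. with shell_env(LC_ALL="en_US.UTF-8").
fixed_env = dict(os.environ, PYTHONIOENCODING="utf-8")
fixed = subprocess.run(child, env=fixed_env, capture_output=True)

print("broken rc:", broken.returncode, "fixed rc:", fixed.returncode)
```

The broken run exits non-zero with a `UnicodeEncodeError` on stderr, while the run with the environment variable set succeeds.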

Thursday, January 5, 2017

How to scrape an https website with proxies

Hi all,

My last post about scraping with proxies is quite old, so I decided to write a newer version of it. In particular, today I will focus on how to scrape an https website with proxies.

There is also good news about the requests library. Requests did not support SOCKS proxies for quite a long time, but a 2016 release added support. So now requests fully supports both HTTP and SOCKS proxies.

So let's get started. Below I will show you four different examples of how to scrape a single https page. First, we will scrape it with requests using SOCKS and HTTP proxies. Then we will do the same using the urllib3 library.
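As a preview of the requests part, here is how the proxy configuration looks for both flavours. The proxy addresses are placeholders (a made-up HTTP proxy and a local Tor-style SOCKS port), the actual request is commented out because it needs a live proxy, and SOCKS support requires the PySocks extra (`pip install requests[socks]`):

```python
import requests

# Placeholder proxy endpoints -- substitute your own.
http_proxies = {
    "http": "http://10.10.1.10:3128",
    "https": "http://10.10.1.10:3128",
}
socks_proxies = {
    "http": "socks5://127.0.0.1:9050",   # e.g. a local Tor daemon
    "https": "socks5://127.0.0.1:9050",  # socks5h:// would resolve DNS via the proxy
}

session = requests.Session()
session.proxies.update(socks_proxies)

# Needs a live proxy, so it is commented out here:
# response = session.get("https://httpbin.org/ip", timeout=10)
# print(response.json())
```

The same `proxies` dict can also be passed per-call as `requests.get(url, proxies=socks_proxies)` instead of being attached to a session.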