Creating a compelling product vision is a difficult task; selling that vision is even more challenging. The balance between an innovative and a sellable vision is elusive. “How to create and sell a great vision” was the topic of the sixth Product Tank Berlin meetup, which took place at Fyber last week.
After the presentations, we had the honour of chatting to two great coaches of Design Thinking and innovation, Jens Otto Lange and Stefan Haas, and asked them about incorporating Design Thinking ideas into the product management and daily workflow of a tech company. Both have great experience and knowledge of agile methodologies and facilitate co-creation workshops to train teams on their thorny path of digital innovation and product creation.
So first of all, for those who have never heard of Design Thinking, could you explain how you would suggest applying these methods in the real world, in the day-to-day operations of a company?
I know, for example, you have this room here in your company – it’s a creative space without desks, and this different environment fosters a different mode of working. The room is one of the three main success factors: it’s the physical environment. You should create these varied kinds of environments in your daily business. The other important factor is changing the culture, so that the corporate culture is favourable to incorporating this mode of thinking into the daily routine.
Creating a room is not that hard; it’s much more about creating a mental room. We organised a creative room at our own office. First, we evaluated and discussed what we wanted in the room and what we wanted to leave out. This really created a working groove over a few days. It was fascinating to see, when we opened the room up again to the rest of the office and held a presentation – well, it went horribly. They expected finished, shiny prototypes, but instead were shown Post-its, scribbled notes, and flip charts. One of the team members then walked the team through our list – our description of the mental room. To construct a creative space that favours innovation you don’t need much architecture, only a sheet where you write down things you can relate back to. It’s that easy: with simple tools from an office supply shop you can build a room for more creativity.
Another thing of course is that you have some people on the team that are trained, that can facilitate these kinds of meetings and train the others. The basic thing here is to open your mind to new options and possibilities and then close your mind again, be aware of the new options, then open, close, etc. People who are trained towards a certain mindset are needed to promote these methods within the team.
You talked a lot about mindset, is there something about organizational structures that you think needs to change or be adapted to allow for this?
Design Thinking is based on three factors: the room, the process, and the team – a team that is cross-functional. In big companies it’s often the case that people have to get permission to work together across functions or departments, and resources must be available – so even a single day of working together might have to be planned months in advance. This limits the kind of work possible, because when you only work with people from your own department, you get no new perspectives or ideas – you are always stuck in your own project.
When you want to be innovative and fast, you want a quick feedback cycle. The decision path has to be shorter and to shortcut that path you have to make changes in your organization. Evaluate how long the distance is from you to the customer who provides final feedback on the product. If that path is long, then you have to start experimenting and changing things. That’s why we work with cross-functional teams in the sprint, because there has to be a very close connection between general strategic decisions and final designs.
There should be a balance between the strategic-level decision makers, using tools such as the Value Proposition Canvas, and the people working on the product and on a tangible prototype of the final product. The product must be iterated with input from both levels.
The sprint method shortcuts the horizontal dimension by creating a cross-functional team for the building process. What we added to that is the strategic dimension, which depends on the hierarchy in the organization: making more of the people who take strategic decisions part of the group. This way you have two feedback cycles, one going in each direction.
You mention the strategic level and the day-to-day operations. Is there a specific cadence of how to make use of these tools? How do we get from the vision that we thought up in a week long workshop to really making use of it on an ongoing basis?
There are different ways of doing this, and there is still a discussion about which way is best, but of course you should make it a sustainable part of how you work. Different companies do it differently. Some run these kinds of sprints every week, and then every four sprints they hold an innovation week, work with new ideas, and try to push something into the development cycle. Others try a dual-track setting, where an innovation or product team works two sprints ahead of the production team. All of these setups have benefits and problems, but they all try to make innovation a sustainable, ongoing activity. We strongly recommend that these companies implement testing cycles that facilitate the move from qualitative to data-driven testing, because then they can execute an idea in a sprint and see whether it works while it is still just a hypothesis. It’s crucial to implement this test-driven innovation cycle in your company, because without it you’ll lose your path again.
When you’re conducting these workshops, how do you guarantee or facilitate buy-in from stakeholders who were not part of the workshop – for example, stakeholders who never saw the ideas that were voiced at the workshop but didn’t make it into the final presentation?
We’re pretty sure that you don’t get buy-in unless people are really convinced. If the stakeholders are not part of the creation process, they miss the story. They have no clue what is behind this little paper prototype, they can’t relate to the experiences, and they haven’t been outside the building doing interviews. Even within the creation team, some people go outside to conduct the interviews and some stay. Those who stay in the room don’t have that first-hand experience; they have to join, too. I know companies where every single person has to do interviews. One thing we recommend is to set up a regular user-testing lab. It’s very cheap and easy to do: every week, for an hour, invite users to the office. You can do a proper interview or run quick usability tests. Your customers are right there, so make use of that. It has a huge impact.
If management is not part of the innovation or creation process, the workshops are a good way of making them a part of it. Maybe not five days, it can be a shorter thing. The only constraint I found here was the hierarchy and the understanding of one’s role. I found it easier to put managers into a team together, instead of mixing them up with employees of other hierarchical levels. When you mix the levels up, you can observe that some managers push their ideas quite strongly and the workshop is less productive.
What do you think is the best way of getting used to these methodologies if you weren’t part of this school? For example, if someone says: “I have no idea about Design Thinking, but I’ve heard about it and I think it sounds great. I’d like to make use of these methodologies at my company or in my work.”
Our team relies on a mix of training and applying the tools. You should experience the methodologies in two key areas: training and problem solving. The best way to learn these tools is to apply them in your specific context. Otherwise you’ll have the tools in place, do a nice setup exercise, and then come back thinking, “How will I match this with what I do?” So it’s best when people come to the workshops with their own challenges in mind and learn how to solve those issues while they train with the tools.
We follow two paths. One is taking people out of their job context and having some tool training through simulation and games. However, then they can have a hard time transferring what they learned into their daily job. I think what’s more effective is using an action learning style. People get to solve a problem that they’re experiencing in their job, at the same time that they’re learning more about the tool.
Is there a favorite introductory tool or workshop technique that gets people excited about Design Thinking methods? And what would you suggest for people who don’t have the time or money to spend on proper training, but just want to find out if this is something for them?
Participate in a Service Design Jam! It’s a global activity, like the Product Tank, and it takes place for an entire weekend twice a year. There are all kinds of locations where you can participate in this creative workshop. The next one is at the end of the month in Berlin, and it’s a volunteer activity where anyone can join and carry out a design thinking process to develop services. The methodology is Design Thinking and it’s a really easy way to practice and get into the methodology, while having fun.
What would be something that four to five product guys in a company could implement, without knowing all the in-depth methods Design Thinking offers?
I think the easiest and most fun way is when you start prototyping. Bring in clay or Legos and spend half an hour prototyping. That moves your mind from the more rational to the productive and intuitive thinking mode and you’ll explore new space in thinking. So prototyping is the first thing I would introduce.
The basic idea is to first define your problem and then define your solution. Always think in options and then come to a decision. Another helpful method is working with tangible things – for example, drawing the problem instead of writing it down. My basic premise is to introduce intuition into the business again and not to rely only on rational deductive thinking. Rather, bring some emotion into what you do. Start incorporating things that give you a more holistic view, like drawing, observing people, or analyzing bits of data from different interviews.
On the whole observation and user interview topic: It works great for companies that are consumer-focused. Do you have tips for companies whose customers are other businesses? You can’t easily bring them into your offices, for example.
One way is indirect. Bring in salespeople who speak to customers and make them your interview partners. Another way is to just select a few customers and have a phone call or Skype call with them. If you already have an established B2B customer base, just visit them or organize user groups – many software companies do that to create a forum for discussion.
Scala has been a much-talked-about programming language since its adoption by companies such as Twitter, LinkedIn, Foursquare and many others. It runs on the Java Virtual Machine (JVM), and its guiding principle is to be a SCAlable LAnguage – good for both small and huge projects, and a tool of choice for applications that must scale to accommodate growth in the world of Big Data. Scala’s syntax is concise to the point of resembling a scripting language, and yet it is a feature-rich language with strong object-oriented instruments, first-class functions, a library of efficient immutable data structures, and a general preference for immutability over mutation. At the meetup of the Scala User Group – Berlin Brandenburg at Fyber’s Berlin headquarters last week, we had the pleasure of welcoming Mathias Doenitz, lead developer of spray.io and an outspoken, passionate Scala-ist. Mathias presented a talk on “Reactive Streams & Akka HTTP” as part of his European tour of Scala user groups. We caught up with him during the break and asked some questions about Scala usage in general and Reactive Streams & Akka HTTP in particular.
Could you tell us how you started with Scala and why you created Spray? How do you feel about Scala constantly evolving over the years you’ve used it? What did you like in the beginning, and what do you like now?
I started using Scala in 2010, so that’s almost five years ago. I was in some way frustrated after using Java for many projects over many years. I felt like I wasn’t moving forward anymore as a developer. It was the same thing over and over again: you know the patterns, you apply them. There wasn’t really any learning happening anymore. So I was looking into new stuff, saw Scala, and really liked it because of the conciseness of the language. Of course, back in 2010 the ecosystem wasn’t as mature as it is now – the IDE support in particular – and that was sometimes painful, but the benefit of being so concise and expressive immediately outweighed the problems by far. After two or three months I decided that my next project was going to be completely in Scala, just to learn it, and from then on it was a huge learning curve for me. I entered a completely new world with very interesting things I hadn’t seen before; the whole functional aspect was completely new to me. I was slowly seeing all the benefits of immutability, purity, and so on. What I like about Scala is that you don’t have to use all of its features right away; instead you can use more and more features as you grow as a developer. So in the beginning you might use “Option” instead of “null” – very easy to understand and with immediate benefits. Then you realize that many of the cool methods found on “Option” also exist on collections – so what’s the common path here? You can slowly teach yourself completely different ways of programming. On the other hand, you don’t have to adopt the idiomatic Scala style; you can write the exact same code as before, but in 10% of the lines. I really like that. What I also really like is the fast pace of innovation in the Scala ecosystem. That was something completely unheard of in Java: new features coming out every half a year, developed by incredibly smart and innovative people.
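The “Option instead of null” progression Mathias describes can be sketched in a few lines. This is an illustrative example (names and data are hypothetical, not from the interview): `Map#get` already returns an `Option`, and the same combinators reappear on collections, which is the “common path” he alludes to.

```scala
object OptionExample {
  val ports: Map[String, Int] = Map("http" -> 80, "https" -> 443)

  // Map#get returns an Option, so no null checks are needed.
  def portOf(scheme: String): Option[Int] = ports.get(scheme)

  // map/getOrElse handle both the present and the absent case explicitly.
  def describe(scheme: String): String =
    portOf(scheme)
      .map(p => s"$scheme uses port $p")
      .getOrElse(s"$scheme is unknown")
}
```

The payoff is exactly the one mentioned above: the absent case is visible in the type, and the compiler forces you to deal with it.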
The conferences were small, they felt like family gatherings, you could really get to know the people who would have great influence on Scala’s development. You could talk to them at a conference directly, Martin Odersky was right there with 50 other people. It just felt like a nice world of highly motivated and very talented people to be working with.
As somebody who has moved from Ruby to Scala, what are your tips for Ruby developers to get started with Scala?
The nice thing about Ruby is that it is very concise, too – you can say a lot in a few lines. With Scala, it’s the same. Also, because of type inference, you don’t have your types in your face all the time; you can leave them out when you don’t need them. That also makes it easier for Ruby guys: in the beginning you don’t need to know all the rules of exactly where to put a type annotation. You just let the compiler figure it out, and add an annotation only where it can’t. One main benefit when you come from a dynamically typed language like Ruby is that you catch so many more bugs before your program is even run for the first time, which takes a lot of the pressure off your tests. It’s actually quite easy for Ruby guys to move on to Scala – especially compared to, say, moving from Ruby to Java, which must be a complete pain.
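As a small sketch of the type-inference point (the values here are made up for illustration): local definitions can stay annotation-free, while an explicit type on a public method documents the API boundary.

```scala
object InferenceExample {
  val words = List("akka", "spray", "scala")   // inferred: List[String]
  val lengths = words.map(_.length)            // inferred: List[Int]

  // At an API boundary, an explicit annotation is worth writing out:
  def longest(xs: List[String]): String = xs.maxBy(_.length)
}
```

A Ruby developer can write this almost exactly as they would in Ruby, yet a typo like `words.map(_.lenght)` is rejected at compile time rather than discovered in a test run.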
Many companies in the mobile space that process large amounts of data – for example, Twitter – have made the transition from Ruby to Scala. As a company that is in this transition right now, what advice would you give Fyber’s developers, architects, and programmers who used to program in Ruby as they roll out Scala and Akka?
It depends on the goals you want to achieve. If it’s mostly performance, I wouldn’t immediately do a full migration. I would try to chop up your architecture and concentrate on small chunks, then put in a parallel version that does something similar to what your existing system already does. Bring that up, try to integrate it, and have it run in production. Once this is successful, you can apply the same technique to other components. Depending on how far Fyber has already moved towards a micro-service architecture, you can do this transition step by step. Of course, you would want to concentrate first on the components that are most critical to performance.
What editor or IDE do you use?
I have always been an IntelliJ IDEA fan, and I still use it. I know JetBrains makes a Ruby IDE, so for people who already use that, the transition to the Scala plugin for IntelliJ is probably not gonna be that hard. In the end I don’t really think it matters – it’s a matter of taste and what you feel most connected to.
Akka introduced persistent storage so that messages can be sent in a more reliable way, can you tell us more about it?
One thing that is great when you use Scala is that you are on the JVM and have full interoperability with everything in the Java ecosystem, which is huge. Whatever database you choose, there is probably going to be a Java binding that you can use from Scala. Scala itself is just a language, so there is no direct connection to any type of storage system. Akka, on the other hand, is a library that tries to give you everything you need to work with highly concurrent, distributed applications. One aspect that is becoming more and more important is Event Sourcing. If you want to go down that new way of organizing your application, then Akka and the Akka Persistence module might be something to look into. It’s not a relational database; it’s a completely different way of organizing your applications. If you are looking for a new kind of message-driven storage solution, take a look at Akka Persistence. It’s great, but it requires you to adopt something like Event Sourcing, which you can’t roll out immediately across your whole system. Starting with smaller bits of your total architecture and realizing the benefits of Event Sourcing might help you gradually accept it and transition into it, which I am sure is going to be very exciting. The next application I build is going to be completely event sourced, because I think it’s a nice approach and I like the idea of not throwing away data – of having a log containing everything I have ever seen. Why should I delete stuff? Hard disk space is cheap, and being able to roll back at any time to anything is great. I don’t want to overwrite anything in an immutable database.
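The core idea of Event Sourcing can be sketched in plain Scala, independent of the Akka Persistence API (the account domain below is invented for illustration): state is never overwritten; it is rebuilt at any time by folding over the append-only event log.

```scala
// Events record what happened; they are appended, never mutated.
sealed trait AccountEvent
case class Deposited(amount: Int) extends AccountEvent
case class Withdrawn(amount: Int) extends AccountEvent

object EventSourcingSketch {
  // Replaying the full log yields the current balance,
  // while the complete history remains available for rollback or audit.
  def replay(log: Seq[AccountEvent]): Int =
    log.foldLeft(0) {
      case (balance, Deposited(a)) => balance + a
      case (balance, Withdrawn(a)) => balance - a
    }
}
```

Akka Persistence layers durability, recovery, and snapshotting on top of this pattern, but the fold above is the essence of “not throwing away data”.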
What about fault tolerance for Reactive Streams? TCP has mechanisms to deal with the loss or duplication of messages. Is anything like that planned for Akka Streams? Fault tolerance is one of the cornerstones of Akka, and it would be strange not to have it.
Absolutely, there needs to be something there. If we talk about the network and the TCP protocol, it makes sure that we won’t lose intermediate messages. We can still lose the connection, but that just ends up as an error in the stream – there is no way you will ever have a dropped message on a TCP stream. So that’s good, and inside a JVM we can also assume that we haven’t lost an element. But you are right, the question of fault tolerance in terms of “what happens if a stream stage dies?” is a good one. Currently that just means an error propagates through the stream and terminates it. In the future, we will probably have something like the equivalent of an Akka router, where one stage is automatically scaled across several threads or actors. If stages are implemented with actors, then there is going to be supervision, meaning crashed components shouldn’t bring down the complete stream – you can just restart that component.
We understand that when it comes to delivering ads to your users, there’s not a “one size fits all” solution. Your users can be segmented into cohorts with distinctly different behaviors and expectations, and therefore require custom targeting to optimize their ad experience.
Fyber’s new User Segmentation feature is designed to provide you with the tools you need to successfully segment and target your user base, based on a variety of usage, behavior, and demographic metrics. This feature, in conjunction with our existing ad delivery rules, allows you to not only provide your users the best-fitting ad experience, but also to maximize your monetization strategy and extract maximum value from every user.
Providing user data
Before getting started, please ensure that your Fyber SDK is up-to-date. To set up User Segmentation, you must use Fyber SDK 7.0 or newer. Through the Fyber SDK, you can provide information about the user which can be used to define your segments in the Ad Monetization Dashboard. Fyber offers a wide array of pre-defined parameters to segment your user base. To name a few: in-app purchase amount, last session time, account creation date, age, and gender. You can also define custom parameters in the form of a key:value pair.
For more information on how to use the Fyber SDK to provide user data, please see our documentation.
Defining user segments
The next step is to identify your different user groups and the parameters that define them. As previously mentioned, you can select from Fyber’s pre-defined parameters or set up custom ones.
Once you have decided how you want to define your user segments, you can set them up through Fyber’s Ad Monetization Dashboard. For example, let’s say that you would like to differentiate the ad experience for paying vs. non-paying users. You can simply set up two different segments that are defined by the parameter, “IAP amount”. This will allow you to refine your strategy and monetize these two user groups in a distinctly different way.
You can create as many user segments as you need, and if you don’t find the criteria you require among Fyber’s pre-defined parameters, you can create your own parameters to fit your requirements.
Customizing your user experience
Using Fyber’s segmentation feature in conjunction with ad delivery rules allows you to easily determine how you want to monetize your different user groups and customize their ad experience accordingly.
Let’s take again the example of paying vs. non-paying users: You could choose to create a rule that limits the frequency of ads shown to paying users, or choose to eliminate ads to this group completely. On the other hand, let’s assume that you want to boost the number of ads shown to non-paying users in order to drive more revenue from ad monetization. You could set up a frequency rule that applies a max number of four ads per hour, paced 10 minutes apart. As with any ad delivery rule, you have the option of applying it globally or just to a particular country, allowing you to differentiate the user experience on a regional level.
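The semantics of a rule like the one above (a maximum of four ads per hour, paced 10 minutes apart) can be sketched in a few lines. This is purely illustrative – the real rule is configured in Fyber’s Ad Monetization Dashboard, not implemented in app code, and all names below are hypothetical.

```scala
object FrequencyRule {
  val maxPerHour = 4
  val minGapMinutes = 10L

  // shownAt: minutes (since some epoch) at which ads were already shown.
  // An ad may be shown only if fewer than 4 were shown in the last hour
  // AND the last one was at least 10 minutes ago.
  def mayShow(shownAt: List[Long], nowMinutes: Long): Boolean = {
    val inLastHour = shownAt.count(t => nowMinutes - t < 60)
    val gapOk = shownAt.forall(t => nowMinutes - t >= minGapMinutes)
    inLastHour < maxPerHour && gapOk
  }
}
```

Applied globally or per country, such a rule is what lets you show non-paying users more ads without flooding any single session.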
To implement this feature, please ensure that you are using Fyber SDK 7.0 or newer. You can download our latest SDK through Fyber’s Developer Portal. For more detailed information, including how to set up segments and customize the user experience, please read our user guides for iOS and Android. If you’re ready to get started, please contact your Account Manager.
Upper Right: The event drew a packed house.
Lower Right: Britta joined Fyber’s systems engineer, Robert Gardam, for post-talk discussions.
Many companies have to deal with large amounts of requests and data, specifically in a high-traffic mobile environment, and questions arise on how to sort and make sense of this data in as close to real time as possible. Open source tools, such as Elasticsearch, significantly help deal with this challenge. Fyber was excited to host a meetup of the Elasticsearch User Group Berlin, organised by @asquera, and to have the very knowledgeable Britta Weber give a talk entitled “Making sense of your logs with the ELK stack”. She spoke about Elasticsearch, Logstash, and Kibana and provided lots of great advice and examples of practical applications.
We caught up with Britta after the talk to explore a few questions on our minds, such as: How can Elasticsearch, Logstash, and Kibana help your work with big data? What are some best practices for an administrator working with Elasticsearch? The ideas and opinions expressed in this interview are Britta’s alone and not Elasticsearch’s.
What determines a log retention period? At Fyber, we work with large quantities of data and split our logs by the hour. Each hour amounts to about 60-70GB. We index a lot, but what do you suggest?
The length of the retention period depends on company policy – and even laws – so it’s a decision you have to make for yourself. Sometimes, with this amount of data, it makes sense to index on a strong machine and move the index to a less performant one once the indexing is done. If you need to keep an index around for a long time but not necessarily search on it, you can close the index and store it somewhere; this reduces the need to keep all indices online. Your company policy and hardware constraints will determine what’s best for your company or project.
How do you size an Elasticsearch cluster for logs?
This goes along with the question of how you manage indices, and it can be a problem. First you need to know what you have to support: Do you need more indexing speed, or will you be serving lots of queries? What are the constraints? For example, how long is a query allowed to take? Should it last a maximum of 10 milliseconds, or can it be a minute? Some people just want to check summary statistics every morning, and then it’s okay to have a query run for an hour.
How big is a shard allowed to be for querying?
You need to perform tests to see what works best for your situation. There’s a technique you can use to determine this: First, start off with one index and one shard, and start indexing into it. Then measure your query performance. Eventually query latency will exceed what you are willing to accept – this is the size the shard is allowed to reach. Then look at how many lines/documents you expect per index, and you’ll know how many shards you need per index. Just remember that you can’t split indices once they are created – but the good thing is you can always add another index and start indexing into the new one.
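The last step of this technique is simple arithmetic. As a sketch (the numbers in the test are illustrative, not sizing recommendations): once you have measured the largest document count one shard can hold within your latency budget, the shard count per index falls out directly.

```scala
object ShardSizing {
  // docsPerIndex: expected documents per index (e.g. per daily/hourly index)
  // maxDocsPerShard: the measured limit from the one-shard experiment
  def shardsPerIndex(docsPerIndex: Long, maxDocsPerShard: Long): Long =
    // round up: a partially filled shard is still a shard
    (docsPerIndex + maxDocsPerShard - 1) / maxDocsPerShard
}
```

Since the shard count is fixed at index-creation time, it pays to run this calculation before creating the index rather than after.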
If a query comes in and the data is only on one shard, it will only ever run on that shard; parallelization is only achieved by splitting the index into shards. If you have an extremely large number of parallel queries, it may be worth increasing the number of replicas and adding hardware. The best performance is achieved when you have one shard per node, but that’s not always necessary.
How should you manage upgrades to your Elasticsearch cluster?
The guides for Elasticsearch tell you how to do this.
How much time should you spend configuring index mappings?
A lot (laughs). If you have a lot of data, it makes sense to think about which fields you actually need to analyze. If you are using Logstash, you may not need the “_all” field or the analyzed fields – if you can get rid of an analyzed field, super! The mapping also plays a huge role in the quality of search results, so experimenting with different settings is often inevitable.
Is there a good reason to use aliases in the ELK stack? (indexing and searching)
Yes, you should always use aliases – always. The reason is that they simplify your processes. For example, if you want to reindex, you can switch the alias easily, as it’s an atomic operation. You can switch quickly without needing to change everything that points to the indices.
When should you separate the roles in your cluster (i.e., master from data nodes)?
In general, if you have a big cluster – for example, 10 nodes – it makes sense to create dedicated master nodes. The reason is that your master node has to be quick, it has to be up and running, and the more work the master does, the more unstable your whole cluster could be. So relieving the master of all the data-node work, like indexing, makes sense. Always make sure the master does not run out of memory. The master node can be lightweight, but it’s important to make sure it can handle the size of the cluster state. The cluster state can grow when you have lots of indices with lots of mappings; it is shared throughout the cluster, and if the master isn’t able to hold it, the master will run out of memory.
What key metrics should an Elasticsearch administrator look at when determining the health of the cluster?
Heap, always look at the heap. Make sure you don’t see the bad garbage collection pattern, the one that looks like a saw-tooth – that’s a bad sign. Give your node more heap, but not more than 32GB. There’s a webinar by my colleagues called “Pre-flight checklist” that provides a lot of guidance on this. Check it out.
What should an administrator do when the cluster comes up red?
Buy Elasticsearch support (laughs). When the cluster comes up red, look at the health of the cluster first and then at how many shards are active, how many are initializing, and how many are unassigned (especially after a reboot). If some are initializing, wait a minute, be patient, and don’t panic. Look at the heap and the logs – the logs will tell you a lot. From there on, it’s tricky to give general advice, since the best course of action depends on what you saw.
How many logs could a logstash stash if a logstash could stash logs?
(Laughs) A cagillion!
Many thanks to Britta for taking the time to sit down with us, and to all those who made it out for the event! If you missed the talk, the presentation slides are available for download here.
The first Ruby User Group Berlin meetup of 2015 took place on the 8th of January at Fyber. We are always happy to welcome our Ruby developer friends and aficionados, but we were a tad worried that the cold weather and the heavy Berlin winter rain would deter many from attending. Who wants to endure those subzero conditions, right? Wrong! Even the Ruby devs who had travelled back from their Christmas holidays earlier that day made it out for an evening of Ruby talks, beers, and pizza. It was great to see a full house and some great topics up for discussion.
Bodo Tasche gave the guests a fantastic Introduction to Statemachines; you can find the slides to his talk here. By the way, he also does tech podcasts for bitsofberlin.org – if you haven’t checked that site out yet, do so immediately, you are missing out!
Mattias Günther gave a fascinating talk entitled “Lord of the Code Smells for Padrino”; you can find the slides here. He talked about some great tools that help you discover the deeper problems in a system and avoid the smells that lower the quality of your project. Very useful, especially in the dynamic start-up scene, where the developers on a team change often over an application’s lifespan.
2015 is really off to a great start – for RUG::B at least. We at Fyber absolutely can’t wait to see what other great meetups and talks they will organize this year. Join us for the next meetup at Fyber: this time it’s the Scala Group meetup on Thursday, the 12th of February, with a talk by Mathias Doenitz on Reactive Streams & Akka HTTP. Fyber’s doors are always open to you guys!
Are you looking for the perfect opportunity to join Berlin’s vibrant developer community and put your Ruby skills to the test? Fyber’s seven dedicated development teams offer the challenge you are looking for. Check out the job openings on our new careers page.
2014 is coming to a close – and oh what a year it’s been! From our rebranding journey back in July, to our exciting acquisition news in October, to the many product and mediation partnership announcements in between, this year has been a landmark for us. But the truth of the matter is, we couldn’t have done it without you. Without the support and confidence of our developer and advertiser partners, we wouldn’t have experienced the explosive growth that we’ve achieved over the past five years, and we wouldn’t be sitting where we are today. So we want to take the time this holiday season to thank you for joining us on this exciting ride – we can’t wait to see where 2015 takes us!
In our latest case study, we take a look at how Social Point – one of Europe’s leading game developers with more than 50 million monthly active users – successfully streamlined their ad monetization strategy, while pinpointing key opportunities to maximize revenue. In the study, we examine how Social Point:
- Achieved average weekly eCPMs 45–82% higher than publishers who only integrated a single ad source.
- Determined when and how to tweak the amount of in-game currency rewarded for completing offers.
- Tackled key ad monetization challenges, such as non-standardized reporting of KPIs from various demand sources and time-consuming integrations.
Starting February 1st, 2015, Apple is requiring that all new app submissions to the App Store be developed for iOS 8, using Xcode 6 and including 64-bit support. Beginning June 1, 2015, all updates to existing apps will have to follow the same requirements. To address any questions you may have on this topic, we’ve prepared a detailed FAQ to help you and your team.
As a developer, what do these requirements mean for me?
It means that your app, as well as all SDKs and libraries included in your app, have to comply with the new requirements.
So to avoid rejection of any new apps submitted on or after February 1st, please ensure that your app is built using the iOS 8 SDK and that it fully supports the 64-bit architecture.
If you are looking to convert your app to a 64-bit binary, you can find more information regarding this transition through Apple’s iOS developer library.
What about Fyber’s SDK & the adapters for mediated ad networks?
The Fyber SDK (6.5 or newer) and our mediated ad network adapters already include 64-bit support and are fully compatible with iOS 8 SDK and Xcode 6.
If you are running a version older than 6.5, please download and integrate Fyber’s latest SDK.
What about my mediated ad networks?
Since you are also using the SDKs of your mediated ad networks, in addition to Fyber’s SDK, we recommend that you include only those ad networks whose SDKs are fully compatible with the new requirements. If you are running a version older than the minimum compatible version, please download and integrate that ad network’s latest SDK.
Below is a comprehensive list, based on ad format, regarding the 64-bit compatibility of Fyber’s partner ad network SDKs. Please check with your partners regarding the minimum version required for compatibility.
What if my app is built in Unity?
Unity will be supporting 64-bit both in the existing 4.6 version and in the new Unity 5, which is currently still in beta. If you are using an older version, you will need to update.
UPDATE 12/1/2015: According to Unity’s blog, the first public version of Unity 4.6 with 64-bit support should be available by the end of January 2015. If you are using an older version (Unity 4.x/3.x), please refer to the article for more information and advice on upgrade strategies.
For any other questions, please don’t hesitate to reach out to your Account Manager. All of us at Fyber wish you a happy and safe holiday!
2014 has been a great year at Fyber – from the unveiling of our new developer dashboard to the announcement of our new corporate brand, as well as many new partnerships that have helped to strengthen our Fyber network. So it’s no wonder that we ended the year with a bang, with two unforgettable Christmas parties in our home cities of Berlin and San Francisco.
The Berlin Fyber team celebrated by traveling back to the Golden Twenties – think Marlene Dietrich, think cabaret, opulence, and a truly decadent celebration. Our fabulous ladies and stylish gentlemen danced the night away at the Tangoloft of Fabrik23. Delicious food and drinks were complemented by piano music, followed by electro swing to finish off the night.
Fyber’s SF office, on the other hand, time-traveled into the past of 1950s America. The team bowled the night away at Mission Bowling in greaser and sock-hop attire, while sipping on speciality holiday cocktails and listening to 50s holiday music.
The two parties were a fantastic way for both of our global teams to celebrate the holidays and wrap up a very eventful 2014! 2015, we’re ready for you!
On the heels of our recent roll-out of Ad Control for Interstitials, we’re excited to announce that we’re introducing the same set of features to help you manage your ad monetization strategy for Rewarded Video. You can now easily prioritize your mediated ad networks with our Demand Source Priority rules and manage your users’ ad monetization experience by setting up impression caps and pacing rules directly from the Dashboard, without the need to write a single line of code.
Fyber’s new Demand Priority feature allows you to stay in control of your direct deals, while continuing to benefit from Fyber’s Predictive Algorithm to optimize the rest of your demand sources. Want to top-rank a video ad network because you have a direct deal with that specific partner? Just drag and drop the mediated ad networks into the order you’d like to prioritize them and indicate whether you’d like to apply these settings to a specific country or globally.
For example, let’s say you have a deal with Ad Networks A & B in the US and UK. You can easily top-rank them just for those countries, which means that in those regions they will be served before all other demand sources. Then, after the ads from your top-choice ad networks are called, you could either rank a second-choice ad network, or group together multiple ad networks and leave it up to Fyber’s Predictive Algorithm to serve the top-paying ads.
Ad Delivery Rules
Show your users the right ads at the right time with Fyber’s new Frequency-Capping and Pacing features. The impression cap allows you to define the maximum number of impressions allowed per day for each user, while the pacing rule lets you decide exactly how often your users are exposed to ads. These rules can be applied to all regions or defined per country.
For example, let’s say your app has demonstrated strong in-app purchase revenues in Germany, so you want to reduce your users’ exposure to ads. You can set up a delivery rule in this specific country that defines a max of 10 impressions per day and a pacing rule that allows ads to be shown every 50 minutes, while the rest of the region is served with 20 impressions per day and a pacing of every 15 minutes.
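To make the interaction between the two rules concrete, here is a minimal sketch of the delivery logic using the numbers from the German example above. The real rules are configured in Fyber’s Dashboard, not in code; the `RULES` table and `can_show_ad` helper are purely illustrative:

```python
from datetime import datetime, timedelta

# Hypothetical illustration of frequency-capping plus pacing. The German
# rule mirrors the example above: max 10 impressions/day, one ad every
# 50 minutes; everywhere else gets 20/day with 15-minute pacing.
RULES = {
    "DE":      {"daily_cap": 10, "pacing": timedelta(minutes=50)},
    "default": {"daily_cap": 20, "pacing": timedelta(minutes=15)},
}

def can_show_ad(country, impressions_today, last_impression, now):
    rule = RULES.get(country, RULES["default"])
    if impressions_today >= rule["daily_cap"]:
        return False  # frequency cap reached for today
    if last_impression is not None and now - last_impression < rule["pacing"]:
        return False  # pacing window has not elapsed yet
    return True

now = datetime(2015, 1, 15, 12, 0)
# German user, 3 impressions so far, last ad 30 minutes ago: paced out.
print(can_show_ad("DE", 3, now - timedelta(minutes=30), now))  # → False
# Same user 25 minutes later: eligible again.
print(can_show_ad("DE", 3, now - timedelta(minutes=55), now))  # → True
```

Note that both conditions must pass: a user under the daily cap can still be blocked by the pacing window, and vice versa.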
Ready to get started?
These features are readily available through the Ad Monetization Dashboard for your iOS applications, but to take advantage of them on Android, please ensure that your app is using Fyber SDK 6.5 or a newer version (download our latest SDK). For any additional questions, please contact your Account Manager.